“Do Algorithms Rule the World? Data Privacy and the GDPR with Maja Brkan” (Two Think Minimum Podcast)

Two Think Minimum Podcast Transcript
Episode 005: “Do Algorithms Rule the World? Data Privacy and the GDPR with Maja Brkan”
Recorded on February 23, 2018

Chris: Hello, and welcome to another episode of TPI’s podcast, Two Think Minimum. I’m Chris McGurn, TPI’s Director of Communications. This week, we’re fortunate to be talking with Maja Brkan, who is an Assistant Professor in European Law on the faculty of law at Maastricht University in the Netherlands. Maja also has the distinction of being part of our AI conference, which was held earlier this week. We will be discussing with her some of the issues that she covered and presented in her paper, which is entitled “Do Algorithms Rule the World? Algorithmic Decision Making and Data Protection in the Framework of the GDPR and Beyond.” For those of you who are not academics, we will start by finding out what the GDPR is. She will go into what she presented at the conference yesterday, as well as give some of her opinions on how the conference went overall. We are also going to be joined by Scott Wallsten, TPI’s President and Senior Fellow, as well as Sarah Oh, TPI’s Research Fellow. Without further ado, I hand it over to Sarah and Maja for a conversation that we hope you all enjoy.

Sarah: Thanks, Maja, for coming to Washington, D.C. I enjoyed your presentation yesterday. Would you give us a top-level summary of your research from yesterday?

Maja: First of all, thank you very much for inviting me, and you did a great job pronouncing my name and Maastricht University in the Netherlands. What I presented yesterday is a question related to automated decision making, or better to say, algorithmic decision making. Algorithmic decision making is decision making based on algorithms. An algorithm, very simply speaking, is a set of steps to accomplish a certain task, but algorithms can also make complex decisions. For example, when you go to a bank, they can estimate your credit record, your credit score. Or when you go to insurance companies, they can estimate your risk. Or they can provide for targeted advertisements: when you surf the web and book a room in New York, suddenly you get a lot of advertisements for other rooms in New York. My research focuses on the legal issues of this algorithmic decision making and also on the question of citizens’ right to explanation.
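
To make that definition concrete, here is a minimal sketch, not from the podcast itself, of what rule-based algorithmic decision making can look like. The inputs and thresholds are hypothetical illustrations, not an actual credit-scoring model:

```python
# A minimal sketch of rule-based algorithmic decision making.
# The inputs and thresholds below are hypothetical illustrations,
# not a real credit-scoring model.

def credit_decision(income: float, existing_debt: float, missed_payments: int) -> str:
    """Follow a fixed set of steps to reach a decision."""
    debt_ratio = existing_debt / income if income > 0 else float("inf")
    if missed_payments > 3:
        return "deny"     # step 1: hard rule on payment history
    if debt_ratio > 0.5:
        return "refer"    # step 2: high leverage goes to manual review
    return "approve"      # step 3: otherwise approve automatically

print(credit_decision(income=50_000, existing_debt=10_000, missed_payments=0))  # approve
```

Real systems replace hand-written rules like these with statistical models, which is precisely what makes the resulting decisions harder to explain.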

Sarah: Great. It’s a good time to ask more about the right of explanation and some definitions from the GDPR.[1]

Maja: First we should clarify what the GDPR is – it’s the General Data Protection Regulation, which is the new data protection package in Europe, and it is going to apply as of the 25th of May. In Europe, there are currently quite a lot of initiatives for companies to become compliant with the GDPR. With regard to the right to explanation more specifically, you should know that this is quite a battlefield for different academics.

Academics disagree very much as to whether the GDPR introduces this right to explanation or not. I’m more on the side of the camp which says there should be a right to explanation, even though it is not explicitly written down in the article. The importance of this right to explanation lies in the fact that people should understand why a certain decision was taken and should also be able to exercise their right to contest the decision. They do have a right to contest the decision, but that right can only be effective if they know why exactly the decision was taken. Of course, there are a lot of obstacles to such a right to explanation. One of the main obstacles is technological – the fact that you cannot really explain a complex algorithm – but further research is needed in this regard.

Sarah: Great. I know the European privacy framework is more conservative, or more regulatory, than the U.S. framework. What kinds of burdens does that regulation place on businesses and new firms? What are the opposing views to the right of explanation? That it’s too costly? Or not well-defined?

Maja: Yes, that’s a very good question, actually. The privacy package, I believe you said it correctly, can be seen as more conservative than in the U.S., because it does institute a very high level of protection for European citizens. Now, the burden it places on companies is that it introduces a new concept, the concept of risk assessment, as opposed to the rights-based assessment applied in previous legislation. That means that whenever companies process data, especially sensitive data such as data about race, religious beliefs, or sexual orientation, they will have to make a risk assessment before they process that data, and this risk assessment will especially be necessary whenever they use algorithmic or automated decision making.
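
As a rough illustration of that risk-based approach, a pre-processing check might look like the sketch below. The category list and function are illustrative assumptions, not the GDPR’s actual legal test:

```python
# Hypothetical pre-processing gate, loosely modeled on the GDPR's
# risk-based approach: flag processing that touches special categories
# of data or involves automated decision making before it runs.

SPECIAL_CATEGORIES = {"race", "religious_beliefs", "sexual_orientation", "health"}

def risk_assessment_required(data_fields: set[str], automated_decision: bool) -> bool:
    """Return True when a risk assessment should precede processing."""
    touches_sensitive_data = bool(data_fields & SPECIAL_CATEGORIES)
    return touches_sensitive_data or automated_decision

print(risk_assessment_required({"name", "religious_beliefs"}, automated_decision=False))  # True
print(risk_assessment_required({"name", "email"}, automated_decision=True))               # True
```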

The problem with the GDPR, partially, is that it only applies to individual citizens. Many academics have been wondering what happens if you have a group of people – for example, if you are trying to profile a group of people living in a certain geographical area that is more prone to criminal acts. That is something where the GDPR will still need to be complemented in the long run.

Chris: One question I have about the GDPR: there was a two-year period in which it was supposed to be implemented. Have there been any hiccups, or has your research been modified in any way, as you’ve watched how the process of implementing the GDPR has actually gone?

Maja: Practically speaking, there have been a lot of hiccups, because we are aware that many companies – especially small and medium-sized enterprises – in Europe are not yet prepared for the GDPR, and that deadline of the 25th of May is fast approaching. There have been trainings offered at our university. Recently, a European Centre on Privacy and Cybersecurity has been established; its director is my colleague, Cosimo Monda. They are offering a lot of trainings for data protection officers, who will ensure compliance with the GDPR within a company. Big companies, in contrast to SMEs (small and medium-sized enterprises),[2] have already introduced compliance plans. They are really trying to figure out which provisions they have to comply with, and I know that there have been files prepared for that at a high management level. But I would say that for smaller enterprises there are still a lot of hiccups and a lot of problems in this regard.

Scott: Have you heard people worry about unintended consequences? For example, I was just on the phone with someone who is worried about the GDPR’s effect on piracy – concerned that they will no longer be able to query ICANN’s WHOIS database to help identify music or video or other kinds of pirates. It seems like this has lots of potential implications for many industries, maybe some good, some bad. Have you heard people worry about problems in specific industries?

Maja: Yes, there has been a lot of opposition from certain voices to the GDPR, because the GDPR, as I mentioned already, offers quite a high level of protection. That, of course, has the downside you mentioned: if you protect everybody from the data protection perspective, even those who are illegitimately downloading online content, you are less capable of identifying and pursuing them.

From that perspective, I would mention that there is another instrument that is quite relevant in Europe, and that is the Network and Information Security (NIS) Directive, which is quite important from the cybersecurity perspective.[3] I very much believe that data protection measures should go hand in hand with cybersecurity, which is also very much a rising trend in this field. So yes, there has been a lot of criticism of the GDPR, especially from that perspective, but also from other viewpoints.

Scott: One of the things about the GDPR, it seems to me, is that it really puts a stark light on trying to balance people’s preferences with straight-out economic growth.[4] We know data is a tool for innovation. Companies want more and more data; the more data they have, the more things they can do. But the more data they have, the less privacy you have. And we know that Europeans tend to have a stronger preference for privacy than do Americans. Do people in Europe – I know it’s wrong to treat all Europeans as the same – feel like that’s a fair tradeoff? That is, do they know they’re giving something up, but feel their privacy is more important, something they want to protect, even though it comes with certain costs?

Maja: Yes, that’s definitely the prevalent opinion in Europe. The protection of privacy and data protection, which are both fundamental rights in Europe, does outweigh the potential economic cost – the fact that economic growth will maybe be a little bit lower because of that high protection is not considered an issue for European citizens. However, I should mention that the current economy doesn’t run only on personal data; it also runs on a lot of non-personal data. In this regard, the European Union has recently issued a lot of documents in the framework of its digital single market strategy, including the mid-term review of that strategy in 2017.[5] There has been a proposal for a legal act that would enable the free flow of data in order to boost the economy. So the EU and its institutions are aware of the economic opportunities that data offers, but not necessarily to the detriment of European citizens.

Scott: On that side of it, it’s trying to apply the same notion of free trade of goods and services to intangibles, like data.

Maja: Yes, free trade and even, free flow of data.

Chris: We’re talking a lot about the GDPR and what it means. But to get back to your paper and your research on the algorithms behind it, which you presented at the AI conference yesterday, I was wondering what your research shows about how algorithms are dictating the collection and protection of this data. Where do you see the evolution of machine learning to protect this sort of data from a European context?

Maja: The issue with algorithms with regard to data protection is that it is quite difficult to build data protection into the design of the algorithm. We know that Ann Cavoukian coined the term “privacy by design,” and with regard to complex algorithms, it’s quite difficult to implement that privacy by design within the algorithm.[6] I would say that algorithms can have very important social impacts and social implications. For example, I know that Facebook is scraping data from Facebook accounts and trying to profile individuals who might have a certain medical issue. They have even predicted which people would be of a certain sexual orientation, or would have psychological problems, and so on and so forth. Algorithms can even have an impact on democracy.
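
Article 25 of the GDPR (quoted in footnote 6) names pseudonymisation as one technical measure for implementing privacy by design. A minimal sketch of that single measure, with the key handling and record layout as illustrative assumptions, might look like this:

```python
# Pseudonymisation sketch: replace a direct identifier with a keyed
# hash before a record enters an analytics pipeline. The secret key,
# its storage, and the record layout are illustrative assumptions.

import hashlib
import hmac

SECRET_KEY = b"held-separately-by-the-controller"  # hypothetical key management

def pseudonymise(user_id: str) -> str:
    """Deterministically map a direct identifier to a pseudonym."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"user_id": "alice@example.com", "page_views": 42}
safe_record = {"pseudonym": pseudonymise(record["user_id"]), "page_views": record["page_views"]}
print(safe_record)  # the analytics side never sees the raw identifier
```

Pseudonymisation is only one safeguard – the keyed mapping can be reversed by whoever holds the key – which is why Article 25 lists it alongside principles like data minimisation and other organisational measures.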

We have seen issues with regard to the recent elections in the U.S., where algorithms played quite an important role in predicting the results and in – manipulation is maybe too strong a word – influencing how voters vote. The thing with algorithms is that you can really profile a person and really target that person according to his or her personal needs. You can impact a person’s freedom of choice and freedom of decision, in a way. Sometimes I even wonder whether all the products you suddenly buy on the Internet are customized or not.

Scott: But you’re saying that as if it’s a bad thing. I mean, isn’t it better for people to have products customized for them?

Maja: There are good sides and bad sides. Customization, I believe, is a good thing, but influencing personal choices, such as election choices, is something I find problematic, because I believe that advertising in this regard should be open to everyone, and everybody should make his or her own choice. And sometimes, when you’re targeted for a specific product that’s not necessarily linked to you, these algorithms can also make wrong decisions. That’s what I’m worried about.

Scott: The elections seem to be different – different than selling you soap. Usually, the way we deal with that here has been that you have to disclose that something is a political advertisement in support of so-and-so, and that has not been applied to online platforms yet. That’s part of the debate here, how to deal with that, because we have First Amendment issues around how much you can control what people, or advertisers, are saying online. But we want people to know what an ad is truly about, for fair elections. How are European countries dealing with that, or are they?

Maja: That’s something we have to see with regard to the GDPR – how the GDPR is going to be applied in that respect. The GDPR does contain a provision that regulates algorithmic decision making, which I spoke about yesterday: the famous Article 22, which, if you read it, is in a way quite restrictive, because in principle it prohibits automated decision making. However, when you look a little more closely, you see that there are so many exceptions that it looks like Swiss cheese.

In the end, I think this Article looks very restrictive. The problem with it is that I don’t think it reflects reality, because every time an automated decision is taken, the person who is the target of the decision has the right to object and always has the right to human intervention. I’m always wondering: if I’m buying a flight ticket that is priced on the basis of dynamic pricing and I want human intervention in that process, I just don’t see how that can happen.

Scott: It also seems like it would be incredibly inefficient. Ticket prices are different for all kinds of reasons.

Sarah: To go to Article 22, from my notes yesterday, I’ll just read it. Article 22 governs automated individual decision making – the right not to be subject to a decision “based solely on automated processing.”[7] So, is that what you’re saying, that there is an appeal process?

Maja: Yes, those are two different things. One is that whenever a decision is taken on the basis of personal data through automated processing, citizens have the right to human intervention, which is what we just discussed. Human intervention should be provided in the most critical cases, and we should ensure that a human has the final say on the automated decision.

However, that is not always possible; for something less significant, like dynamic pricing, human intervention is obviously going to be less important. And then there is this other right, the right to object – not really a right to appeal, but a right to object to the automated decision – which also empowers European citizens by giving them the possibility of not agreeing with automated decisions.
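
One way to picture how a system might operationalize those two safeguards – and the “significantly affects” threshold that comes up later in the conversation – is the sketch below. The significance flag, the review queue, and the data layout are all illustrative assumptions, not the text of Article 22:

```python
# Sketch of routing automated decisions per the safeguards discussed
# here: decisions with legal or similarly significant effects go to a
# human reviewer, and the data subject can object to the outcome.
# The significance flag and review queue are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Decision:
    subject: str
    outcome: str
    significant_effect: bool            # e.g., a credit denial vs. a minor price tweak
    objections: list[str] = field(default_factory=list)

human_review_queue: list[Decision] = []

def finalize(decision: Decision) -> str:
    if decision.significant_effect:
        human_review_queue.append(decision)   # right to human intervention
        return "pending human review"
    return decision.outcome                   # low-stakes decisions stay automated

def object_to(decision: Decision, reason: str) -> None:
    decision.objections.append(reason)        # right to object / contest
    human_review_queue.append(decision)

loan = Decision(subject="alice", outcome="deny", significant_effect=True)
print(finalize(loan))  # pending human review
```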

Scott: But how does this hold up in practice? I’m wondering how that’s going to happen.

Maja: Right, I am wondering how this is going to happen too. Again, we have to distinguish between different kinds of automated decisions. If you use automated decisions for medical diagnosis, and imagine that a machine learning algorithm suggests a particular treatment that would also have very strong side effects on the patient, then the patient should probably have the right to say, “Well, I don’t want that decision. I object to that decision. Can you, doctor, revise it? Can you check how this decision was made?” But for everyday small-scale profiling, that’s not going to be possible. That’s what I’m saying: this Article is not really realistic.

Scott: It does also seem to me that this fits with other things people were talking about yesterday. With AI, you get prediction but, at some stage, you still need judgment to know what to do with it.

Maja: Yes, exactly. I think that’s a very important element. I think it would be a bad thing if humans always followed these decisions blindly and did not revise them. I can understand that we might trust the machines more than we trust ourselves, but there should be a mechanism of collaboration between machine learning decisions and human decisions, because, as somebody mentioned at the conference, intuition is sometimes quite an important element as well. I agree with that.

Scott: This is a really unfair question, but how are we going to make the decisions about where to draw that line? We already let machines make a lot of decisions for us. In certain elevator systems at large buildings, you select the floor you want to go to and the system decides which elevator you take – you let it decide the fastest way to get to your floor. Or you type your destination into Waze and it tells you how to get there.[8]

Maja: Whenever a decision legally or significantly affects an individual, that’s when we have to have this right. How do you draw the line between when a decision significantly affects somebody and when it doesn’t? I gave an example yesterday: if I get a targeted online advertisement to buy a car, and I follow that advertisement and actually buy the car, does that significantly affect me? That is a very debatable question. I wouldn’t say so, but then there are more important decisions, about medical treatments or other issues. When a woman is pregnant and is suddenly trying to make certain decisions while being influenced by algorithmic decision making, that’s where you are more significantly affected.

Sarah: Related to privacy, the FTC has been holding workshops and doing research to measure injury from privacy harms.[9] It’s an active area of research because it’s hard to put a number on those kinds of injuries – very small injuries from the small ad that you see, compared to very large injuries. Do you know about this literature? How to measure “injury”? Or do you call it “impact”?

Maja: The measurement of this “impact” should not be restricted to privacy alone. There is impact on other fundamental rights. For example, on the right to dignity: if a person with dementia works together with a robot in order to improve his or her condition, that can have a considerable impact on human dignity. Other rights can be affected too, such as freedom of expression and other relevant fundamental rights. So we shouldn’t limit ourselves just to privacy. I believe it’s a broader societal issue, where on the one hand we have to ensure that the fundamental rights of citizens are respected, and on the other hand we have to ensure the possibility of technological development, because technological development is necessary, is positive in general, and will bring about a lot of economic innovation and economic growth. There’s always this balance that needs to be struck between one and the other.

Scott: We’re entering a new era of trying to measure non-market activities. We started doing that with the environment 30 years ago or more; now we have this whole other set of non-market activities that have value, but we don’t know how to quantify them yet.

Chris: Do algorithms rule the world? Bottom. Line.

Maja: That’s the question. It’s difficult to answer with a yes or no, but in a few years, maybe a decade, that could be the case. That’s why I’m always emphasizing how important it is that humans remain in control in the end. I’m not as pessimistic as, say, Elon Musk about how the world will look in a few years.

Artificial intelligence is very important and is actually the core development in the field of technology right now. But I think mechanisms should be in place so that access to the wealth generated through it is distributed in a more or less equal way among people. That might sound quite socialist, but I believe we should not allow the wealth generated through this analysis of data and algorithms to be kept only in the hands of elites. That’s very important.

Chris: I was shocked yesterday that, for a room full of economists, there was a lot of optimism, too, about what AI holds for people. Any final thoughts you wanted to give on your paper, the conference yesterday, or topics in general, AI or otherwise?

Maja: I’d just like to say that we are in exciting times, where we are really on the verge of the fourth industrial revolution. I believe it’s very, very important what you’re doing right now here at the Technology Policy Institute – informing ordinary citizens, and everybody who is interested in podcasts, about the implications of artificial intelligence – because I believe people are not yet aware enough of those implications. I’d just like to thank you for your good work in this regard as well.

Scott: Thank you for coming all the way over here, and presenting your paper, and talking with us today.

Chris: Thank you everybody. Now we’re going to have cupcakes because it’s Sarah’s birthday, we should have mentioned that at the top of the podcast. That’s why we have cupcakes today. Thank you all again for being here. We hope you tune in to this and all other future Two Think Minimum podcasts.

[1] GDPR, General Data Protection Regulation (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), http://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32016R0679.

[2] http://ec.europa.eu/growth/smes/business-friendly-environment/sme-definition_en

[3] https://ec.europa.eu/digital-single-market/en/network-and-information-security-nis-directive

[4] https://www.bloomberg.com/news/articles/2017-06-28/ai-seen-adding-15-7-trillion-as-game-changer-for-global-economy

[5] https://ec.europa.eu/digital-single-market/en/news/digital-single-market-mid-term-review

[6] GDPR, Article 25: “Data protection by design and by default. 1. Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects. 2. The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons. 3. An approved certification mechanism pursuant to Article 42 may be used as an element to demonstrate compliance with the requirements set out in paragraphs 1 and 2 of this Article.”

[7] GDPR, Article 22: “Automated individual decision-making, including profiling. 1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. 2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent. 3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision. 4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.”

[8] https://techpolicyinstitute.org/wp-content/uploads/2018/02/Liu-Whinston-AutonomousVehicles.pdf

[9] https://www.ftc.gov/news-events/events-calendar/2018/02/privacycon-2018, https://www.ftc.gov/news-events/events-calendar/2017/12/informational-injury-workshop
