The following is a summary of Do Algorithms Rule the World? Algorithmic Decision-Making and Data Protection in the Framework of the GDPR and Beyond by Dr. Maja Brkan. This paper was presented at the Technology Policy Institute Conference on The Economics and Policy Implications of Artificial Intelligence, February 22, 2018.
Every second of every day, algorithms make decisions. They may be determining credit scores, tailoring advertising to a target demographic, or undertaking any number of tasks that turn information into outputs. As technology advances, algorithms and automated reasoning become more powerful decision tools and are incorporated into more and more operations.
In her paper Do Algorithms Rule the World? Algorithmic Decision-Making and Data Protection in the Framework of the GDPR and Beyond, Maja Brkan of Maastricht University in the Netherlands asserts that these advances in technology are close to triggering a fourth industrial revolution. Artificial intelligence (AI), she argues, will “change the way our society functions and how humans relate to each other, to alter the job market and job demands.” [1]
Brkan tempers the potential of AI and algorithmic reasoning by noting the ethical and legal implications that accompany them, particularly in the areas of privacy, data protection, and cybersecurity. She points to the European Union’s most recent answer to these questions, the General Data Protection Regulation (GDPR), as an example of baseline “rules of the road” for big data and examines how it applies to automated decision-making for individuals. She argues in favor of additional measures to increase transparency and accountability in AI-individual interactions.
“Automated individual decision-making” is the process of making “decision[s] based solely on automated processing,” though such decisions may still be facilitated and interpreted by humans.[2] Brkan notes that the level of inherent risk in purely automated decision-making differs based on the type of decision. When an algorithm chooses to show you an advertisement for a product that does not interest you, you may spend a moment being annoyed and may have missed an ad for a product you would have liked, but no lasting harm has been done. Whether an algorithm approves or denies a mortgage application, however, can have large and long-lasting, or “binding,” effects.
Article 22 of the GDPR gives individuals the right not to be subject to a decision based solely on automated processing and prohibits automated decision-making that produces binding effects. It gives people the right to human intervention in certain circumstances, but Brkan notes that these “rights” do not apply in all cases. For example, individuals and groups can still be subjected to automated decision-making if the decision is necessary for the performance of a contract or if it is authorized by Union or Member State law.
Brkan argues in favor of safeguards in cases of automated decision-making for individuals and groups, to ensure transparency and to hold decision-makers accountable. She suggests that individuals should have the right to an explanation of the decision-making process or of the logic underlying the algorithm, particularly when decisions are binding. Though the GDPR provides a legal framework for data protection and cybersecurity, it does not directly address AI or big data, and its treatment of automated decision-making remains limited. As technologies advance and automated decision-making becomes increasingly prevalent, Brkan argues that we need “rules of the road” to ensure privacy, transparency, and accountability.
[1] Brkan, 2016, p. 1.
[2] General Data Protection Regulation (Regulation (EU) 2016/679), Article 22.