*The Research Roundup is a semi-regular list of outside research we have found interesting and think is worth sharing. The views and conclusions of the papers’ authors do not necessarily reflect the opinions of anyone affiliated with TPI.*
This month’s articles detail a growing trend: the application of artificial intelligence (AI) and big data in public management and urban governance. In his work, Where and When AI and CI Meet, Stefaan G. Verhulst discusses the opportunities and challenges that arise when new and emerging technologies like AI are deployed in pursuit of public goals. He notes that, on its own, either AI or collective intelligence (CI) – the knowledge, norms, and practices that arise out of collaboration and competition – can come up short: biases in algorithms and issues of trust or collaboration can limit the potential gains of either in the context of public governance. Together, however, Verhulst argues, they can thrive. An infusion of CI can humanize AI, while applications of AI can streamline and scale CI. Verhulst’s work adds to the literature on the relationship between human judgment and AI prediction that was the focus of TPI’s recent conference, “Terminator or the Jetsons? The Economics and Policy Implications of Artificial Intelligence.” The real-world examples included in this piece demonstrate how the use of one can enhance the other as we move toward a more connected and collaborative form of governance. Click through for more detailed descriptions of this and other work on the role of AI in smart cities.
Descriptions of the papers below are edited abstracts from the authors.
This paper seeks to explore the intersection of Artificial Intelligence (AI) and Collective Intelligence (CI), within the context of innovating how we govern. It starts from the premise that advances in technology provide policymakers with two important new assets: data and connected people. The application of AI and CI allows them to leverage these assets toward solving public problems. Yet both AI and CI face serious challenges that may limit their value within a governance context, including biases embedded in datasets and algorithms that undermine trust in AI, and high transaction costs of managing people’s engagement that limit CI’s ability to scale. The main argument of this paper is that some of the challenges of AI and CI can, in fact, be addressed through more interaction between the two. In particular, the paper argues for augmented Collective Intelligence, where AI may enable CI to scale, and human-driven Artificial Intelligence, where CI may humanize AI. Several real-world examples are provided throughout the paper to illustrate emerging trends toward both types of intelligence and their applications to solve public problems or make policy decisions differently.
Mohamed Razaghi & Matthias Finger
This paper argues that mainstream urban governance approaches are built upon the legacy of reductionist doctrine and public administration tools that are not fully compatible with the complex nature of urban infrastructure systems. Recent technological innovations associated with smart cities and emerging sociopolitical trends are opening up new opportunities for governance approaches to overcome such incompatibilities. At the same time, successful introduction of innovations in urban infrastructures, which we understand as complex socio-technical systems, requires smarter governance approaches that are compatible with the systems paradigm. The pace of change in the social and technological landscapes of cities is fast. This conceptual paper brings together insights from the sociotechnical systems, systems theory, and governance literatures to shed light on why city administrations should closely follow these changes and adapt their governance approaches accordingly; otherwise, governance may become a hindrance to realizing the benefits of technology in dealing with increasingly complex urban challenges.
Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo, and Luciano Floridi
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favorable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: (a) the development of a ‘good AI society’; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports adequately address various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. In order to fill this gap, in the conclusion, we suggest a two-pronged approach.
Iain M. Cockburn, Rebecca M. Henderson, and Scott Stern
Artificial intelligence may greatly increase the efficiency of the existing economy. But it may have an even larger impact by serving as a new general-purpose “method of invention” that can reshape the nature of the innovation process and the organization of R&D. We distinguish between automation-oriented applications such as robotics and the potential for recent developments in “deep learning” to serve as a general-purpose method of invention, finding strong evidence of a “shift” in the importance of application-oriented learning research since 2009. We suggest that this is likely to lead to a significant substitution away from more routinized labor-intensive research toward research that takes advantage of the interplay between passively generated large datasets and enhanced prediction algorithms. At the same time, the potential commercial rewards from mastering this mode of research are likely to usher in a period of racing, driven by powerful incentives for individual companies to acquire and control critical large datasets and application-specific algorithms. We suggest that policies which encourage transparency and sharing of core datasets across both public and private actors may be critical tools for stimulating research productivity and innovation-oriented competition going forward.
Sara Brorström, Daniela Argento, Giuseppe Grossi, Anna Thomasson, and Roland Almqvist
This paper shows how sustainable and smart strategies can be implemented in cities and how these strategies influence, and are influenced by, performance measurement systems. Drawing upon the Foucauldian notion of governmentality, the authors present the case of Gothenburg in Sweden, where they interviewed the key actors involved in a new sustainability strategy. Translating strategy into performance measurement systems requires collaboration across organizational boundaries and considerations of financial goals and social and human aspects.
Hazel Si Min Lim & Araz Taeihagh
Amidst rapid urban development, sustainable transportation solutions are required to meet the increasing demands for mobility whilst mitigating the potentially negative social, economic, and environmental impacts. This study analyses autonomous vehicles (AVs) as a potential transportation solution for smart and sustainable development. We identify privacy and cybersecurity risks of AVs as crucial to the development of smart and sustainable cities and examine the steps taken by governments around the world to address these risks. We highlight the literature that supports why AVs are essential for smart and sustainable development, and then identify the aspects of privacy and cybersecurity in AVs that are important for smart and sustainable development. Overall, the actions taken by governments to address privacy risks are mainly in the form of regulations or voluntary guidelines. To address cybersecurity risks, governments have mostly resorted to regulations that are not specific to AVs and are conducting research and fostering research collaborations with the private sector.
Alexa K. Fox & Marla B. Royne
As companies connect with consumers on social media, privacy becomes a significant area of concern. This research assesses consumers’ understanding of social media privacy policies (CUSPP) and fear related to those policies. Study one develops a scale to measure CUSPP. Study two examines the influence of text, audio, and pictorial cues used in social media privacy policies on consumers’ CUSPP and physiologically measured fear. Results suggest presentational cues affect CUSPP and fear of social media privacy policies. This research is among the first to use self-report and physiological measures to assess consumer understanding and emotional reactions in a social media context.
Daniel Greene & Katie Shilton
Mobile application design can have a tremendous impact on consumer privacy. But how do mobile developers learn what constitutes privacy? We analyze discussions about privacy on two major developer forums: one for iOS and one for Android. We find that the different platforms produce markedly different definitions of privacy. For iOS developers, Apple is a gatekeeper, controlling market access. The meaning of “privacy” shifts as developers try to interpret Apple’s policy guidance. For Android developers, Google is one data-collecting adversary among many. Privacy becomes a set of defensive features through which developers respond to a data-driven economy’s unequal distribution of power. By focusing on the development cultures arising from each platform, we highlight the power differentials inherent in “privacy by design” approaches, illustrating the role of platforms not only as intermediaries for privacy-sensitive content but also as regulators who help define what privacy is and how it works.