Research Roundup: Cyber Security, Network Neutrality and More


This edition of Research Roundup highlights a paper by Amalia R. Miller and Catherine Tucker on the risks of publicized data breaches in the health sector. Miller and Tucker perform one of the first empirical analyses of the medical sector by looking at how hospitals have adopted encryption software over time. They find that “the use of encryption software does not reduce overall instances of publicized data loss. Instead, its installation is associated with an increase in the likelihood of publicized data loss due to fraud or loss of computer equipment.” (p. 3) The authors speculate that a focus on encryption software may come at the expense of effective internal access controls and may encourage employee carelessness. In other words, without human-based company processes that complement encryption’s effectiveness, the risk of data loss could increase with the software’s implementation.

(Click through to the full post to see the abstract and link to this paper and 11 others on topics from privacy to copyright policy)

Descriptions of the papers below are edited abstracts from the authors.

Privacy and Cyber Security

Amalia R. Miller and Catherine Tucker

Fast-paced IT advances have made it increasingly possible and useful for firms to collect data on their customers on an unprecedented scale. One downside is that firms can experience negative publicity and financial damage if their data are breached. This is particularly the case in the medical sector. This paper presents empirical evidence that increased digitization of patient data is associated with more data breaches. The encryption of customer data is often presented as a potential solution, because encryption acts as a disincentive for potential malicious hackers and can minimize the risk of breached data being put to malicious use. However, encryption requires careful data management policies to be successful and does not ward off the insider threat. Indeed, the authors find no empirical evidence of a decrease in publicized instances of data loss associated with the use of encryption. Instead, there are actually increases in the cases of publicized data loss due to internal fraud or loss of computer equipment.

Avi Goldfarb and Catherine Tucker

Information and communication technology now enables firms to collect detailed and potentially intrusive data about their customers both easily and cheaply. This means that privacy concerns are no longer limited to government surveillance and public figures’ private lives. The empirical literature on privacy regulation shows that privacy regulation may affect the extent and direction of data-based innovation. The impact of privacy regulation can be extremely heterogeneous. Therefore, the authors argue, digitization means that privacy policy is now a part of innovation policy.

Jane Yakowitz

Accurate data is vital to enlightened research and policymaking, particularly publicly available data that is redacted to protect the identity of individuals. Legal academics, however, are campaigning against data anonymization as a means to protect privacy, contending that the wealth of information available on the Internet enables malfeasors to reverse-engineer the data and identify individuals within them. This Article argues that properly de-identified data is not only safe, but of extraordinary social utility. It makes three core claims. First, legal scholars have misinterpreted the relevant literature, and thus have significantly overstated the futility of anonymizing data. Second, the available evidence demonstrates that the risks from anonymized data are theoretical: they rarely, if ever, materialize. Finally, anonymized data is crucial to beneficial social research, and constitutes a public resource under threat of depletion. The Article proposes that since current privacy policies overtax valuable research without reducing any realistic risks, the law should provide a safe harbor for the dissemination of research data.

Online Advertising

Thani Jambulingam and Rajneesh Sharma

In 2009, the Food and Drug Administration (FDA) released warning letters to 14 major pharmaceutical companies about search engine advertising, effectively curtailing an aspect of internet marketing by the pharmaceutical industry. This article presents an analysis of the stock market reactions of pharmaceutical firms around the time of the FDA announcement, using both regular and abnormal returns. The authors analyze two groups of firms: those that received the warning letters and those that did not. They find a significantly negative stock market reaction for both groups, suggesting that the letters had a negative impact on shareholder value across the industry as a whole. The results indicate that internet marketing is important, and thus it is imperative that the industry work in tandem with the FDA to develop better guidelines on the appropriate use of the internet for the marketing of pharmaceuticals.
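The abnormal-return analysis described above is a standard event-study technique. As a rough illustration of the mechanics (not the authors’ actual code, data, or estimation windows — all numbers below are invented), abnormal returns are typically computed as the gap between a stock’s actual return and the return predicted by a market model fitted on a pre-event window:

```python
# Toy event-study sketch: market-model abnormal returns.
# All numbers are illustrative, not drawn from the paper.

def estimate_market_model(stock_returns, market_returns):
    """OLS fit of the market model: r_stock = alpha + beta * r_market."""
    n = len(stock_returns)
    mean_m = sum(market_returns) / n
    mean_s = sum(stock_returns) / n
    cov = sum((m - mean_m) * (s - mean_s)
              for m, s in zip(market_returns, stock_returns)) / n
    var = sum((m - mean_m) ** 2 for m in market_returns) / n
    beta = cov / var
    alpha = mean_s - beta * mean_m
    return alpha, beta

def abnormal_returns(stock_returns, market_returns, alpha, beta):
    """Actual return minus the return the market model predicts."""
    return [s - (alpha + beta * m)
            for s, m in zip(stock_returns, market_returns)]

# Estimation window (pre-event) daily returns, illustrative:
est_stock  = [0.010, -0.005, 0.008, 0.002, -0.001]
est_market = [0.008, -0.004, 0.006, 0.001, -0.002]
alpha, beta = estimate_market_model(est_stock, est_market)

# Event window: returns around the hypothetical announcement date.
event_stock  = [-0.030, -0.012]
event_market = [0.001, -0.002]
ar = abnormal_returns(event_stock, event_market, alpha, beta)
car = sum(ar)  # cumulative abnormal return; negative => value loss
```

A significantly negative cumulative abnormal return (`car`) over the event window is what would indicate that the announcement, rather than market-wide movement, drove the price drop.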

Search and Network Neutrality

Geoffrey A. Manne and Joshua D. Wright

This paper evaluates both the economic and non-economic costs and benefits of search bias. The authors demonstrate that search bias is the product of the competitive process, linking the search bias debate to the economic and empirical literature on vertical integration and the generally efficient and pro-competitive incentives for a vertically integrated firm to discriminate in favor of its own content. Building upon this literature and its application to the search engine market, they conclude that neither an ex ante regulatory restriction on search engine bias nor the imposition of an antitrust duty to deal upon Google would benefit consumers. They also find the non-economic justifications for restricting search engine bias unconvincing, and particularly susceptible to the well-known Nirvana Fallacy of comparing imperfect real-world institutions with romanticized and unrealistic alternatives.

Eric Goldman

This essay takes stock of the search engine industry circa 2011 and analyzes how the emergence of Net Neutrality as a policy issue has spurred consideration of a “Search Neutrality” analogue. Given the huge economic stakes associated with the search engine industry, it has become almost impossible to distinguish legitimate discourse from economic rent seeking. Despite all of this competitive gunning for Google, there is still no strong evidence that (1) Google has used illegitimate practices to advance or maintain its industry dominance, or (2) consumers cannot or will not gravitate to the most effective online search tools available to them. Without such evidence, it remains equally plausible that the search engine marketplace continues to work as expected and searchers continue to vote with their mice. The author argues that clear justification is needed before regulators get involved.

Christopher T. Marsden

The issue of uncontrolled Internet flows versus engineered solutions is central to the question of a ‘free’ versus regulated Internet. A consumer- and citizen-oriented intervention depends on passing regulations to prevent unregulated, nontransparent controls exerted over traffic via deep packet inspection (DPI) equipment, whether imposed by ISPs for financial advantage or by governments eager to use this new technology to filter, censor and enforce copyright against their citizens. Unraveling the previous ISP limited-liability regime risks removing the efficiency of that approach in permitting the free flow of information for economic and social advantage. These conclusions support a light-touch regulatory regime involving reporting requirements and co-regulation with, as far as is possible, market-based solutions. Solutions may be international as well as local, and international coordination of best practice and knowledge will enable national regulators to keep up with the technology ‘arms race’.


Andrew Odlyzko, Papak Nabipay and Zhi-Li Zhang

The current push for bandwidth caps, tiered usage pricing, and other measures in both wireless and wireline communications is usually justified by invoking the specter of “bandwidth hogs” consuming an unfair share of the transmission capacity and being subsidized by the bulk of the users. This paper presents a conventional economic model of flat rates as a form of bundling, in which consumption can be extremely unequal. For a monopoly service provider with negligible marginal costs, flat rates turn out to maximize profits in most cases. The advantage of evening out the varying preferences for different services among users overcomes the disadvantage of the heaviest users consuming more than the average users.
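The intuition behind flat rates as bundling can be shown with a toy two-consumer, two-service monopoly example (my own illustration in the spirit of the bundling literature, not the authors’ model): when valuations are negatively correlated across services and marginal cost is zero, a single flat bundle price can extract more revenue than pricing each service separately.

```python
# Toy bundling illustration (hypothetical numbers, zero marginal cost).
# Consumer A values service s1 highly and s2 little; consumer B the reverse.
valuations = {
    "A": {"s1": 8, "s2": 2},
    "B": {"s1": 2, "s2": 8},
}

def best_separate_revenue(valuations):
    """Best revenue selling each service at its own uniform price."""
    total = 0
    for svc in ["s1", "s2"]:
        vals = [v[svc] for v in valuations.values()]
        # The optimal uniform price is some consumer's valuation.
        total += max(p * sum(1 for v in vals if v >= p) for p in vals)
    return total

def best_bundle_revenue(valuations):
    """Best revenue selling a flat-rate bundle of both services."""
    bundle_vals = [sum(v.values()) for v in valuations.values()]
    return max(p * sum(1 for b in bundle_vals if b >= p) for p in bundle_vals)

separate = best_separate_revenue(valuations)  # price each service at 8: 8 + 8 = 16
bundled = best_bundle_revenue(valuations)     # flat price 10, both buy: 20
```

Here the heavy user of each service is “subsidized” on the other service, yet the flat bundle still earns more: evening out heterogeneous preferences dominates the cost of the heaviest users consuming above average, which is the mechanism the paper formalizes.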

Internet Trends

Christopher S. Yoo

The Internet unquestionably represents one of the most important technological developments in recent history. It has revolutionized the way people communicate with one another and obtain information, and created an unimaginable variety of commercial and leisure activities. Interestingly, many members of the engineering community often observe that the current network is ill-suited to handle the demands that end users are placing on it. This Essay explores emerging trends that are transforming the way end users are using the Internet and examines their implications both for network architecture and public policy. These trends include Internet protocol video, wireless broadband, cloud computing, programmable networking, and pervasive computing and sensor networks. It discusses how these changes in the way people are using the network may require the network to evolve in new directions.

Intellectual Property and Copyright

*The following three papers are related to the National Research Council (NRC) Committee project on the Impact of Copyright Policy on Innovation in the Digital Era.

Joel Waldfogel

In the decade since Napster, file-sharing has undermined the protection that copyright affords recorded music, reducing recorded music sales. What matters for consumers, however, is not sellers’ revenue but the surplus they derive from new music. The legal monopoly created by copyright is justified by its encouragement of the creation of new works, but there is little evidence on this relationship. The file-sharing era can be viewed as a large-scale experiment allowing us to check whether events since Napster have stemmed the flow of new works. The paper assembles a novel dataset on the number of high-quality works released annually since 1960, derived from retrospective critical assessments of music such as best-of-the-decade lists. The paper finds no evidence that changes since Napster have affected the quantity of new recorded music or artists coming to market. Stable output in the face of decreased demand is reconciled by the reduced costs of bringing works to market and a growing role for independent labels.

Lisa Cameron and Coleman Bazelon

This paper focuses on three key copyright-driven industries: music, film, and books. In all three industries, digitization and the internet have led to a precipitous decline in distribution costs, as well as an enormous increase in piracy that has likely diminished the economic rewards afforded by copyright. In the music industry, the most successful business models to date have relied heavily on both legal enforcement and Digital Rights Management (DRM). The author argues that in order to be effective in the long run, legal enforcement does not have to eliminate piracy altogether, but only make pirated content unappealing to enough consumers who are otherwise willing to pay for it. In the film industry, digitization has had its greatest impact in the post-theatrical release segment. Online distribution is becoming increasingly important at the same time that TVs are increasingly integrated with internet connectivity. This paper contends that in the near term studios will still likely maintain some control over the licensing and artistic output of the movie industry, but in the long term this relationship is likely to change. In contrast to the music and movie industries, traditional book retailers adapted to digitization by introducing e-reader devices that integrated DRM from the beginning. The author asserts that it is not yet clear whether the increase in e-book sales represents shifted sales of physical books or new sales driven by e-readers that piqued consumer interest. The paper concludes with a series of research questions that could provide a better understanding of how these rapidly changing industries will evolve over time.

Mark J McCabe

Online access has transformed the distribution of the scientific literature. This literature is now easier to search and read, especially for the producers of new articles: the scientist authors affiliated with research institutions. Unfortunately, the cost of supporting this enterprise has not declined. Ironically, the same technologies that enable immediate access for readers also facilitate bundling and pricing policies by the major commercial publishers that exacerbate rather than alleviate the inflationary pricing trends of the pre-internet era. Although open access (OA) journals have begun to proliferate, perhaps in response to publisher bundling, their long-term viability absent subsidized author fees remains uncertain. One of the chief benefits of OA is supposed to be greater readership and impact (an assumption that is important in providing the economic justification for the OA business model). However, the evidence in support of this claim remains uncertain. Although initial studies of this question revealed large positive benefits of online access (including open access), more recent papers on this subject have identified a series of data and econometric problems that, when addressed, eliminate most but not all of the presumed benefits.