Journal: California Law Review
Volume: 110
First Page: 2087
Last Page: 2147

Abstract

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial gain. Yet within just a few decades, the private sector has seen a wild proliferation of AI systems, many of them more powerful and invasive than anticipated. In many cases, AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. Even as AI grows more powerful, proprietary algorithmic systems remain technically complex, legally shielded as trade secrets, and managerially invisible to outsiders, creating an opacity that frustrates oversight. As a result, many AI-based services and products have proved invasive, manipulative, and biased, eroding data privacy rules, human rights, and democratic norms in modern society.

The emergence of AI systems has thus generated a deep tension between algorithmic secrecy and data privacy. Yet in today’s policy debate, algorithmic transparency in the privacy context, though equally important, remains managerially disregarded, commercially evaded, and legally unrealized. This Note is the first to illustrate how regulators should rethink data privacy strategies through the interplay of human rights, algorithmic disclosures, and whistleblowing systems. As the world increasingly looks to the European Union’s (EU) data protection law, the General Data Protection Regulation (GDPR), as a regulatory frame of reference, this Note assesses the effectiveness of the GDPR’s response to the data protection issues raised by opaque AI systems. Through a case study of Google’s AI applications and privacy disclosures, it demonstrates that even the EU fails to enforce its data protection rules against problems caused by algorithmic opacity.

This Note argues that because algorithmic opacity has become a primary barrier to oversight and enforcement, regulators in the EU, the United States, and elsewhere should not overprotect the secrecy of every aspect of AI applications that implicate public concerns. Rather, policymakers should consider imposing on firms deploying AI a duty of algorithmic disclosure, implemented through sustainability reporting and whistleblower protections, to strengthen enforcement of data privacy laws and the protection of human rights and other democratic values.
