Legal governance and regulation are becoming increasingly reliant on data collection and algorithmic data processing. In the area of copyright, online protection of digitized works is frequently mediated by algorithmic enforcement systems intended to purge illicit content and limit the liability of YouTube, Facebook, and other content platforms. But unauthorized content is not necessarily illicit content. Many unauthorized digital postings may claim legitimacy under statutory exceptions such as the legal balancing standard known as fair use. Such exceptions exist to ameliorate the negative effects of copyright on public discourse, personal enrichment, and artistic creativity. Consequently, it may seem desirable to incorporate fair use metrics into copyright policing algorithms, both to protect against automated over-deterrence and to inform users of their compliance with copyright law. In this paper I examine the prospects for algorithmic mediation of copyright exceptions, warning that the design values embedded in algorithms will inevitably become embedded in public behavior and consciousness. Thus, algorithmic fair use carries with it the very real possibility of habituating new media participants to its own biases, and so progressively altering the fair use standard it attempts to embody.
Burk, Dan L., Algorithmic Fair Use (November 22, 2017). University of Chicago Law Review, forthcoming.