Mihailis Diamantis, ‘Vicarious Liability for AI’

ABSTRACT
When an algorithm harms someone – say by discriminating against her, exposing her personal data, or buying her stock using inside information – who should pay? If that harm is criminal, who deserves punishment? In ordinary cases, when A harms B, the first step in the liability analysis turns on what sort of thing A is. If A is a natural phenomenon, like a typhoon or mudslide, B pays, and no one is punished. If A is a person, then A might be liable for damages and sanction. The trouble with algorithms is that neither paradigm fits. Algorithms are trainable artifacts with ‘off’ switches, not natural phenomena. They are not people either, as a matter of law or metaphysics.

An appealing way out of this dilemma would start by complicating the standard A-harms-B scenario. It would recognize that a third party, C, is usually lurking nearby when an algorithm causes harm, and that third party is a person (legal or natural). By holding third parties vicariously accountable for what their algorithms do, the law could promote efficient incentives for people who develop or deploy algorithms and secure just outcomes for victims …

Diamantis, Mihailis, 'Vicarious Liability for AI' (May 20, 2021), in Kristin Johnson and Carla Reyes (eds), Cambridge Handbook of AI and Law (forthcoming 2022).
