Hacker, Krestel, Grundmann and Naumann, ‘Explainable AI under Contract and Tort Law: Legal Incentives and Technical Challenges’

ABSTRACT
This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning (ML) applications. In doing so, we make two novel contributions. First, on the legal side, we show that, to avoid liability, professional actors such as doctors and managers may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law and crucially influences questions of contractual and tort liability for the use of ML models. To this effect, we conduct two legal case studies, in medical and corporate merger applications of ML. Second, we discuss the (legally required) trade-off between accuracy and explainability and demonstrate its effect in a technical case study in the context of spam classification.

Hacker, Philipp and Krestel, Ralf and Grundmann, Stefan and Naumann, Felix, Explainable AI under Contract and Tort Law: Legal Incentives and Technical Challenges (January 3, 2020). 28 Artificial Intelligence and Law (2020), forthcoming.
