Bryan H. Choi, ‘AI Malpractice’, 73 DePaul Law Review 301 (2024). When a digital financial or medical advisor gives bad advice, when ChatGPT confabulates that a law professor committed sexual assault, when an autonomous weapon system takes action that looks like a war crime – who should be held liable? Bryan Choi’s excellent ‘AI Malpractice’ makes an important but often overlooked point: the answer isn’t as simple as choosing between negligence and various other potential regimes (strict liability, products liability, enterprise liability, etc.). That’s an important first step, and for a host of reasons, I share Choi’s conclusion that strict liability is the preferable near-term standard. But as AI agents and decision-making technologies proliferate and judges consider the applicability of negligence, there is a critical second-order question: in a negligence regime, what standard should be applied for evaluating whether a duty was breached? …
[Rebecca Crootof, JOTWELL, 10 September 2024]