Bryan Choi, ‘AI Malpractice’

ABSTRACT
Should AI modelers be held to a professional standard of care? Recent scholarship has argued that those who build AI systems owe special duties to the public to promote values such as safety, fairness, transparency, and accountability. Yet, there is little agreement as to what the content of those duties should be. Nor is there a framework for how conflicting views should be resolved as a matter of law.

This Article builds on prior work applying professional malpractice law to conventional software development work, and extends it to AI work. The malpractice doctrine establishes an alternate standard of care – the customary care standard – that substitutes for the ordinary reasonable care standard. That substitution is needed in areas like medicine or law where the service is essential, the risk of harm is severe, and a uniform duty of care cannot be defined. The customary care standard offers a more flexible approach that tolerates a range of professional practices above a minimum expectation of competence. This approach is especially apt for occupations like software development where the science of the field is hotly contested or is rapidly evolving.

Although it is tempting to treat AI liability as a simple extension of software liability, there are key differences. First, AI work has not yet become essential to the social fabric the way software services have. The risk of underproviding AI services is less troublesome than it is for conventional professional services. Second, modern deep-learning AI techniques differ significantly from conventional software development practices, in ways that will likely facilitate greater convergence and uniformity in expert knowledge.

Those distinguishing features suggest that the law of AI liability will chart a different path than the law of software liability. For the immediate term, AI's interloper status, combined with those distinguishing features, indicates that a strict liability approach is most appropriate. In the longer term, as AI work becomes integrated into ordinary society, courts should expect to transition away from strict liability. For aspects that elude expert consensus and require the exercise of discretionary judgment, courts should favor the professional malpractice standard. However, if there are broad swaths of AI work where experts can come to agreement on baseline standards, then courts can revert to the default of ordinary reasonable care.

Bryan H. Choi, AI Malpractice, 73 DePaul L. Rev. (2024).
