Barbara Evans, ‘Rules for Robots, and Why Medical AI Breaks Them’

ABSTRACT
This article critiques the quest to state general rules to protect human rights against AI/ML computational tools. The White House Blueprint for an AI Bill of Rights was a recent attempt that fails in ways this article explores. There are limits to how far ethicolegal analysis can go in abstracting AI/ML tools, as a category, from the specific contexts where AI tools are deployed. Health technology offers a good example of this principle. The salient dilemma with AI/ML medical software is that privacy policy has the potential to undermine distributional justice, forcing a choice between two competing visions of privacy protection. The first, stressing individual consent, won favor among bioethicists, information privacy theorists, and policymakers after 1970 but displays an ominous potential to bias AI training data in ways that promote health care inequities. The alternative, an older duty-based approach from medical privacy law, aligns with a broader critique of how late-20th-century American law and ethics endorsed atomistic autonomy as the highest moral good, neglecting principles of caring, social interdependency, justice, and equity. Disregarding the context of such choices can produce suboptimal policies when, as in medicine and many other contexts, the use of personal data has high social value.

Evans, Barbara J., Rules for Robots, and Why Medical AI Breaks Them, 10 Journal of Law and Biosciences 1 (2023).
