This Essay explores whether health and medical AI should be regulated more like doctors than like devices, and what difference it would make. It concludes that although the FDA is poised to heavily regulate AI with demanding premarket testing standards out of concern for public safety, the risks posed by medical AI should be managed by comparing its performance to the costs and errors of its nearest substitute: doctors. AI will outperform doctors in diagnosis and treatment management; indeed, it already does in some areas. Thus, the public safety concerns at the heart of medical device regulation will be less relevant in the context of medical AI than some of the other, more ancillary duties that doctors owe to their patients and to society: duties to maintain confidentiality, to warn, to obtain informed consent, and to avoid conflicts of interest. In some cases, treating robots like doctors rather than machines reveals flaws in the assumptions and fundamental goals of our longstanding rules of professional conduct. This case study can teach us something about future-proofing law: while most legal scholars have focused on adjusting the law to optimize our future robots, it is just as plausible that robots will help us adjust and optimize our aging laws.
Bambauer, Jane R., Dr. Robot (December 13, 2017). 51 UC Davis Law Review 101 (2017, forthcoming); Arizona Legal Studies Discussion Paper No. 17-28.