Currently, three South Korean medical institutions – Gachon University Gil Medical Center, Pusan National University Hospital and Konyang University Hospital – have implemented IBM’s Watson for Oncology artificial intelligence (AI) system. As IBM touts Watson for Oncology’s ability to ‘[i]dentify, evaluate and compare treatment options’ by understanding the longitudinal medical record and applying its training to each unique patient, questions regarding the status and liability of these AI machines have arisen.
Given its ability to interpret data and present treatment options (along with relevant justifications), AI represents an interim step between a diagnostic tool and a colleague in medical settings. Drawing on philosophical and legal concepts, this article explores whether AI’s ability to adapt and learn means that it has the capacity to reason, and whether this in turn means that AI should be considered a legal person.
Through this exploration, the authors conclude that medical AI such as Watson for Oncology should be given a unique legal status akin to personhood to reflect its current and potential role in the medical decision-making process. They analogize the role of IBM’s AI to that of medical residents and argue that liability for wrongful diagnoses should generally be assessed under a medical malpractice framework rather than through products liability or vicarious liability. Finally, they differentiate medical AI from AI used in other products, such as self-driving cars.
Chung, Jason and Zink, Amanda, Hey Watson, Can I Sue You for Malpractice? Examining the Liability of Artificial Intelligence in Medicine (November 23, 2017). Forthcoming, Asia-Pacific Journal of Health Law, Policy and Ethics.