This paper investigates the question of legal liability for the consequences of decisions made by machine learning technology rather than by humans, although we do not attempt a detailed analysis of the basis on which such liability might be imposed; that is a substantial task which would require far more space than is available here. The initial focus is on private claims for personal injury, property damage and other losses caused by the use of machine learning technologies. Such claims will usually be made via the tort of negligence. Equally importantly, we identify some of the threats to individual autonomy and fundamental rights which are created by the use of machine learning to make decisions. Breach of those fundamental rights is a second source of potential liability. We conclude by suggesting a potential link between liability and the preservation of those fundamental rights which might offer an interim solution to this issue, drawing in particular on the concept of accountability and its attribute of transparency.
Reed, Chris, Elizabeth J Kennedy and Sara Nogueira Silva, 'Responsibility, Autonomy and Accountability: Legal Liability for Machine Learning' (17 October 2016), Queen Mary School of Law Legal Studies Research Paper No 243/2016.