Traditional tort law benefits consumers by holding accountable the parties responsible for injury, encouraging greater care in manufacture, and, ultimately, making injured victims whole. Whether traditional notions of legal responsibility comport with the advent of Artificial Intelligence has become a darling of academic research. Less attention, however, has been given to the increasingly limited role that traditional notions of learning patterns and brain functionality play in the changing landscape of robotic service products. Findings from neuroimaging, organizational psychology, and systemic risk analysis show that human decision making does not occur as traditionally portrayed; the human brain, in other words, is not the ideal analogue to ‘machine learning’. Robots ‘learn’ by amassing and recognizing relevant data, and ‘decide’ by calculating the probability of a desired outcome based on the input received, as applied across numerous permutations of a given function.
Bypassing the question of whether robots can be liable, this Paper focuses on the extent to which machine learning heightens robotic accountability, and asks: at what point ought the law hold robots liable because the decision creating the harm was not a function of front-end software programming, but a function of robotic choice? This Paper recommends a variation of Ugo Pagallo’s ‘digital peculium’ liability scheme for ‘hard cases’ – those in which fully autonomous robots make decisions absent an appropriate linkage to the original programmer and, thus, fall outside the scope of pre-programmed uncertainty. Situating Pagallo’s ‘hard cases’ within the larger jurisprudential debate between H.L.A. Hart and Ronald Dworkin, this Paper concludes by considering whether a right answer – or a conclusive indetermination of any – exists for the application of legal accountability to ever-increasing robotic autonomy.
Sheriff, Katherine D., Defining Autonomy in the Context of Tort Liability: Is Machine Learning Indicative of Robotic Responsibility? (December 12, 2015).