Digital autonomous systems are characterized by their ability to make their ‘own’ decisions, ie decisions that are not fully determined by the software that animates them. As such, they pose a challenge to existing liability systems and to the general law of delict or torts. At the European level, the European Parliament took the initiative and drafted a Regulation on Liability for the Operation of Artificial Intelligence Systems, which it recommended for adoption by the Commission. The European Parliament distinguishes between high-risk AI systems, which shall be governed by a regime of strict liability, and ‘other’ AI systems, which create only normal risks and are left to fault-based liability as defined in the legal systems of the Member States. With a view to the addressees of the new liability scheme, the draft regulation distinguishes between frontend and backend operators. The following chapter discusses the fundamental choices made by the framers of the draft. It concludes that the proposal’s focus on user liability is misguided, as manufacturers are the central actors who determine the safety features of AI systems. Moreover, introducing the concept of a backend operator creates needless friction with the Products Liability Directive. While the commitment to strict liability for high-risk AI systems drives a wedge into existing regimes of strict liability under national law, the imposition of fault-based liability on users of ordinary AI systems forces some Member States to roll back more generous rules of tort law.
Wagner, Gerhard, Liability for Artificial Intelligence: A Proposal of the European Parliament (July 14, 2021).