Peter Wills, ‘Care for Chatbots’

ABSTRACT
Individuals will rely on language models (LMs) like ChatGPT to make decisions. Sometimes, due to that reliance, they will get hurt, have their property damaged, or lose money. If the LM had been a person, the injured individual might sue it. But LMs are not persons.

This paper analyses whom the individual could sue, and on what facts they could succeed, according to the Hedley Byrne-inspired doctrine of negligence. The paper identifies a series of hurdles that conventional Canadian and English negligence doctrine poses and how they may be overcome. These hurdles include identifying who is making a representation or providing a service when an LM generates a statement, determining whether that person can owe a duty of care based on text the LM reacts to, and identifying the proper analytical path for breach and causation.

To overcome such hurdles, the paper asks how courts should understand who ‘controls’ a system. Should it be the person who designs the system, or the person who uses it? Or both? The paper suggests that, in answering this question and thereby assigning responsibility, courts should prioritise social dimensions of control (for example, who understands how a system works, not merely what it does) over physical dimensions of control (such as on whose hardware a program runs).

The paper makes further contributions in assessing what it means (or should mean) for a person not only to act, but to react, via an LM. It identifies a doctrinal assumption that when one person reacts to another’s activity, the first person must know something about the second’s activity. LMs break that assumption, because they allow the first person to react to information from another person without any human having knowledge of that information. The paper thus reassesses what it means to have knowledge in light of these technological developments, and proposes redefining ‘knowledge’ so that it would accommodate duties of care to individuals when an LM provides individualised advice.

The paper then shows that a deep tension runs through the breach and causation analyses in Anglo-Canadian negligence doctrine, relating to how to characterise someone who follows an imprudent process in performing an act whose ultimate outcome is nonetheless justifiable. One option is to treat that person as having breached the standard of care, but to hold that the breach did not cause the injury; another is to treat them as not in breach at all. The answer to this question could significantly affect LM-based liability because it determines whether ‘using an LM’ is itself treated as a breach of the standard of care.

Finally, the paper identifies alternative approaches to liability for software propounded in the literature and suggests that these approaches are not plainly superior to working within the existing framework that treats software as a tool used by a legal person.

Wills, Peter, Care for Chatbots (May 1, 2024). Forthcoming in a 2024 special issue of University of British Columbia Law Review (57:3).