Richards, Hartzog and Francis, ‘Comments of the Cordell Institute on AI Accountability’

ABSTRACT
Responding to NTIA's recent inquiry into AI assurance and accountability, we offer two main arguments regarding the importance of substantive legal protections. First, a myopic focus on concepts of transparency, bias mitigation, and ethics (for which procedural compliance efforts such as audits, assessments, and certifications are proxies) is insufficient when it comes to the design and implementation of accountable AI systems. We call rules built around transparency and bias mitigation 'AI half-measures' because they provide the appearance of governance but fail, when deployed in isolation, to promote human values or to hold liable those who create and deploy AI systems that cause harm. Second, any rules and regulations concerning AI systems must focus on substantive interventions rather than mere procedure. Flexible consumer protection standards, such as prohibitions on unfair, deceptive, and abusive acts or practices, are the kind of technology-neutral measures that will protect individuals from harmful or unreasonably risky deployments of AI systems and encourage responsible innovation. Woven together as a vast regulatory fabric, these principles can invigorate and strengthen procedural tools such as audits and certifications, to the benefit of consumers both individually and as a group.

Richards, Neil M and Hartzog, Woodrow and Francis, Jordan, Comments of the Cordell Institute on AI Accountability (June 12, 2023), available at https://www.regulations.gov/comment/NTIA-2023-0005-1291 (Comment ID: NTIA-2023-0005-1291).
