The ethical issues associated with AI have attracted significant attention. A range of guidelines for ‘Ethical AI’ have been promulgated by international agencies, domestic governments and others, but a criticism of these guidelines is that they operate at a high level of generality and are difficult to operationalise. Given the situated, complex ethical issues that AI generates, high-level principles may be insufficient to the task. This presents problems for directors, who will be faced with AI ethics risks yet may lack useful guidance. Given that the vast majority of AI will be developed and applied within corporate contexts, this issue is significant.
However, existing models of corporate regulation offer a potentially useful framework within which directors and others can analyse AI ethics issues. Directors’ obligations to make decisions in the best interests of the company and to act with care and diligence already require them to make subtle, contextualised decisions, and these duties are responsive to evolving community norms. Further, directors’ obligations are supported by well-understood and well-developed accountability mechanisms and, ultimately, by state-based enforcement regimes. In the context of current critiques of AI ethics principles, these are significant advantages and point to the value of directors’ duties in the evolution of corporate AI ethics practice.
Vivienne Brand, ‘Artificial Intelligence and Corporate Boards: Some Ethical Implications’, in Andrew Godwin, Pey Woan Lee and Rosemary Langford (eds), Innovation, Technology and Corporate Law (Edward Elgar Publishing, forthcoming, 2021).