AI Ethics Toolbox

AI ethics resources for faculty use, including AI tools specifically relevant to teaching AI ethics.

The AI + Public Policy: Understanding the Shift conference, held on March 23, 2018, aimed to foster a common understanding of essential technical concepts and historical context among participants from various sectors. Carole Piovesan, a litigation lawyer at McCarthy Tétrault and the firm's lead on artificial intelligence, focused her talk on the challenge of accountability, particularly on assigning responsibility for the actions of AI systems.


In this article by Stephen Sanford, the U.S. Government Accountability Office introduces its first federal framework for AI accountability, designed to guide organizations in managing AI systems responsibly. The framework covers the entire AI life cycle, from design to monitoring, and emphasizes the need for thorough governance, data management, performance metrics, and continuous oversight. By incorporating insights from a diverse range of stakeholders, the framework aims to translate theoretical principles into practice, helping organizations address ethical and societal concerns effectively.


In this article, the concept of accountability in artificial intelligence (AI) is explored to address its often vague and varied definitions. Accountability is critical in AI governance but is frequently imprecise due to the complex sociotechnical nature of AI systems. The authors clarify accountability by framing it in terms of answerability, with three essential conditions (authority recognition, interrogation, and power limitation) and an architecture comprising seven features: context, range, agent, forum, standards, process, and implications. They evaluate this architecture against four accountability goals: compliance, reporting, oversight, and enforcement. The article argues that these goals are usually complementary, and that their relative emphasis varies depending on whether accountability is applied proactively or reactively and on the specific objectives of AI governance.


In recent years, Finale Doshi-Velez, Mason Kortz, and their team have examined explanation as a mechanism for ensuring accountability in AI systems, work that has informed various frameworks for AI regulation and best practices. In their latest discussion, Doshi-Velez and Kortz revisited AI accountability and regulation, focusing on practical approaches for managing AI systems in real-world contexts. They weighed the benefits of ex-ante versus ex-post monitoring, explored the roles of different accountability mechanisms, reflected on how their perspectives have evolved, and proposed potential pathways for future development in AI accountability and regulation.