AI Ethics Toolbox

AI ethics resources for faculty use, including tools specifically relevant for teaching AI ethics.

In this TED Talk, mathematician and data scientist Cathy O'Neil delves into the pervasive role of algorithms in modern society, emphasizing their impact on critical decisions such as loan approvals, job interviews, and insurance eligibility. However, she cautions that these algorithms do not inherently ensure fairness. O'Neil introduces the concept of "weapons of math destruction" to describe algorithms that are not only secretive and influential but also potentially harmful. This talk sheds light on the often-hidden biases and agendas embedded within these mathematical models.


This video explores San Francisco's recent ban on the use of artificial intelligence (AI) for facial recognition. The decision arises from concerns about the technology's reliability, in particular its poor accuracy in identifying Black people, and Black women especially. It also covers the UK's response, where the government has established an official unit to investigate bias in facial recognition and other areas of AI.


This video explores a recent report from a coalition of healthcare providers finding that hospitals across the nation use software systems whose algorithms carry racial biases. Such bias can lead to misdiagnosis or delays in critical medical treatment. Dr. Jayne Morgan, a cardiologist and president-elect of Southeast Life Sciences, elaborates on these concerns in her discussion with Geoff Bennett.


The video highlights concerns about racial bias in technology, particularly in algorithms used for facial recognition and identity verification. It features the case of a driver who was unfairly dismissed by Uber due to algorithmic errors linked to racial bias. The discussion also touches on historical and contemporary examples of technological discrimination and advocates for improved regulation and oversight to address these biases effectively.


The documentary explores the growing influence of artificial intelligence (AI) and the inherent biases within these technologies. It highlights the issue of bias in AI, such as facial recognition systems that fail to accurately identify individuals of darker skin tones due to skewed training datasets. The film emphasizes that AI reflects historical inequalities and biases, influenced by the limited and homogenous perspectives of its developers. It calls attention to the need for greater awareness and regulation to ensure AI technologies do not perpetuate existing social injustices.


The article discusses the shortcomings and biases of automated systems used in public service delivery. It highlights cases where automation led to severe consequences for vulnerable populations, such as a cancer patient whose Medicaid benefits were cut off, along with failures in welfare, homelessness, and child welfare systems. Virginia Eubanks, author of Automating Inequality, argues that these systems often exacerbate existing inequalities rather than alleviate them, reflecting biases embedded in their data and algorithms. She advocates for more human-centered approaches and careful consideration of how these technologies affect marginalized communities.


In the third installment of the You and AI series, Cynthia Dwork from Harvard delves into the emerging field of algorithmic fairness. She explores the complexities and challenges involved in designing algorithms that make impartial decisions, emphasizing the difficulties in ensuring that machine learning systems operate without bias.


An algorithm used nationwide is found to underestimate the health needs of Black patients by relying on healthcare costs as a proxy, which are skewed due to systemic racism. This highlights how machine learning can perpetuate and amplify existing inequalities. The article advocates for causal models to better identify and address underlying biases in data, emphasizing their advantages over conventional fairness measures like demographic parity and individual fairness. It discusses how causal models can test algorithmic fairness through counterfactuals, sensitivity analysis, and impact assessment. Collaborative, interdisciplinary approaches and stakeholder involvement are crucial for developing equitable algorithms.
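For instructors who want a concrete classroom illustration of two ideas named in the article above, here is a minimal sketch. The function names and the toy data are invented for illustration and are not taken from the article: demographic parity compares the rate of favorable decisions across groups, and a simple counterfactual test asks whether changing only a group attribute flips a model's decision.

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rates across groups.

    decisions: list of 0/1 outcomes (1 = favorable, e.g. approved)
    groups: parallel list of group labels
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gr in zip(decisions, groups) if gr == g]
        rates[g] = sum(outcomes) / len(outcomes)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]


def counterfactual_consistent(model, record, attr, values):
    """Check that the decision does not change when only `attr` changes."""
    base = model(record)
    return all(model({**record, attr: v}) == base for v in values)


# Toy data: group "a" gets a favorable outcome 3/4 of the time, group "b" 1/4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(decisions, groups))  # 0.5

# A model that ignores the group attribute passes the counterfactual test.
model = lambda r: 1 if r["income"] > 50 else 0
record = {"income": 60, "group": "a"}
print(counterfactual_consistent(model, record, "group", ["a", "b"]))  # True
```

As the article notes, counterfactual checks like this probe causal questions ("would the decision change if only group membership changed?") that aggregate measures such as demographic parity cannot answer on their own.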