AI Ethics Toolbox

AI ethics resources for faculty use, including valuable AI tools specifically relevant to teaching AI ethics.

This video explores the rapid growth of artificial intelligence and its impact on society, particularly the risks posed to democracies by AI-generated disinformation. As AI becomes more sophisticated, distinguishing between real and fabricated content is increasingly difficult. With major elections taking place in 2024, the video investigates how AI-driven disinformation could be used to influence electoral outcomes, raising concerns about the vulnerability of democratic systems.

This video examines the potential misuse of artificial intelligence to spread disinformation, particularly through social media. AI and deepfakes can generate convincing, human-like content, which raises concerns about their role in amplifying false information. Efforts are being made to counteract these risks, including developing detection strategies. Professor Stuart Geiger from the Data Science Institute discusses the implications of using AI, such as ChatGPT, in social media contexts and the challenges involved in mitigating its negative effects.


The ACLU podcast series, available on their website, features discussions on pressing issues related to privacy and technology. The episodes cover topics like digital rights, surveillance, the impact of emerging technologies on civil liberties, and the fight to protect privacy in the modern age. Through interviews with experts, activists, and policy-makers, the podcast provides insights into how technology intersects with individual freedoms and what can be done to safeguard them.


The article "How AI Threatens Democracy," published in the Journal of Democracy, explores the growing impact of artificial intelligence on democratic systems. It discusses how AI technologies can be used to manipulate information, influence public opinion, and undermine trust in democratic processes. The article also highlights the dangers of AI-driven disinformation campaigns, the erosion of privacy, and the challenges that governments face in regulating these technologies.

The declassified report titled "Foreign Threats to the 2020 US Federal Elections," released by the Office of the Director of National Intelligence (ODNI), provides an assessment of foreign influence activities targeting the 2020 U.S. elections. It concludes that multiple foreign actors sought to influence voter perceptions and undermine public confidence in the electoral process. The report identifies Russia as a primary actor involved in spreading disinformation, while other nations like Iran also engaged in influence efforts.

This video discusses a recent political ad supporting Florida Governor Ron DeSantis and attacking former President Donald Trump. The ad features a voice that sounds like Trump but is, in fact, artificially generated. The video highlights concerns about the use of AI-generated content in political ads, which can blur the line between real and fake and make it difficult for voters to discern the truth. Dr. Aubrey Jewett from UCF emphasizes the need for disclaimers to inform the public when AI is used, and New York Congresswoman Yvette Clarke has proposed the Real Political Ads Act to address these issues.


The article "X's Misinformation Problem Is Getting Worse" by Peter Suciu, published in Forbes, discusses the growing issue of misinformation on X (formerly Twitter). It describes how changes in platform moderation and policy enforcement have allowed false information to spread unchecked, a problem that experts say has worsened since Elon Musk's acquisition, with potential harm to public discourse and democracy. The article calls for stricter regulations and greater transparency to address misinformation effectively.