Detroit Today: Michigan bill aims to regulate artificial intelligence in political campaigns
A new bipartisan bill in the Michigan House would require disclaimers on political ‘deepfakes’ and campaign ads that use artificial intelligence.
Emerging technologies are making it easier to generate media – audio, video and images – that can depict real people almost indistinguishably from authentic recordings.
We’ve already seen examples of political actors using AI-generated content to try to influence elections — both in the U.S. and around the world — which is why some lawmakers in Michigan are trying to get ahead of the problem.
State Rep. Penelope Tsernoglou (D-East Lansing); Jovana Davidovic, an associate professor of philosophy at the University of Iowa; and Josh Goldstein, a research fellow at Georgetown’s Center for Security and Emerging Technology, joined Detroit Today on Monday to discuss how these technological advancements could affect our democracy and the ethical concerns behind their use in political campaigns.
Subscribe to Detroit Today on Apple Podcasts, Spotify, Google Podcasts, NPR.org or wherever you get your podcasts.
Guests:
Penelope Tsernoglou is a Democrat representing the 75th District in the Michigan House of Representatives. She recently introduced a bill package that would regulate the use of AI-generated content in political advertisements. She says the legislation will help fight misinformation and its impact on our elections.
“What we’re doing is trying to address the fact that AI generated images [we’ve found are] almost indistinguishable right now from real images,” said Tsernoglou. “And we want to protect our elections and our democracy from misinformation.”
Jovana Davidovic is an associate professor of philosophy at the University of Iowa. She says we need a multi-prong approach to combat the volume of AI-generated content we are exposed to.
“Education, at the individual level is one aspect of that,” said Davidovic. “I think the second aspect of that is industry regulating itself.”
Josh Goldstein is a research fellow at Georgetown’s Center for Security and Emerging Technology. He says AI language models can be used to create unique and persuasive propaganda, making misinformation hard to track.
“If you have a language model, you don’t need to use ‘copypasta,’ because you could ask a model to rewrite a given message for you in different words, which may look more like real public opinion,” said Goldstein.