MADISON, Wis. (AP) — Wisconsin lawmakers on Thursday passed bills to regulate artificial intelligence, joining a growing number of states grappling with how to control the technology as November’s elections loom.
The Assembly approved a bipartisan measure to require political candidates and groups to include disclaimers in ads that use AI technology. Violators would face a $1,000 fine.
Voters need disclosures and disclaimers when AI is being used so they can tell the difference between fact and fiction, said the bill’s sponsor, Republican Rep. Adam Neylon. He said the measure was an “important first step that gives clarity to voters,” but more action will be needed as the technology evolves.
“With artificial intelligence, it’s getting harder and harder to know what is true,” Neylon said.
More than half a dozen organizations have registered in support of the proposal, including the League of Women Voters and the state’s newspaper and broadcaster associations. No groups have registered against the measure.
The Assembly also passed on a voice vote a Republican-authored proposal that would make manufacturing and possessing images of child sexual abuse produced with AI technology a felony punishable by up to 25 years in prison. Current state law already makes producing and possessing such images a felony with a 25-year maximum sentence, but the statutes don’t address digital representations of children. No groups have registered against the bill.
The Assembly also approved a bill that calls for auditors to review how state agencies use AI. The measure also would give agencies until 2030 to develop a plan to reduce their positions. By 2026, the agencies would have to report to legislators which positions AI could help make more efficient and report their progress.
The bill doesn’t lay out any specific workforce reduction goals and doesn’t explicitly call for replacing state employees with AI. Republican Rep. Nate Gustafson said Thursday that the goal is to find efficiencies in the face of worker shortages and not replace human beings.
“That’s flat out false,” Gustafson said of claims the bills are designed to replace humans with AI technology.
AI can include a host of different technologies, ranging from algorithms recommending what to watch on Netflix to generative systems such as ChatGPT that can aid in writing or create new images or other media. The surge of commercial investment in generative AI tools has generated public fascination and concerns about their ability to trick people and spread disinformation.
States across the U.S. have taken steps to regulate AI within the last two years. Overall, at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills last year alone.
Legislatures in Texas, North Dakota, West Virginia and Puerto Rico have created advisory bodies to study and monitor AI systems their state agencies are using. Louisiana formed a new security committee to study AI’s impact on state operations, procurement and policy.
The Federal Communications Commission earlier this month outlawed robocalls using AI-generated voices. The move came in the wake of AI-generated robocalls that mimicked President Joe Biden’s voice to discourage voting in New Hampshire’s first-in-the-nation primary in January.
Sophisticated generative AI tools, from voice-cloning software to image generators, already are in use in elections in the U.S. and around the world. Last year, as the U.S. presidential race got underway, several campaign advertisements used AI-generated audio or imagery, and some candidates experimented with using AI chatbots to communicate with voters.
The Biden administration issued guidelines for using AI technology in 2022, but they include mostly far-reaching goals and aren’t binding. Congress has yet to pass any federal legislation regulating AI in political campaigns.
___
This story has been updated to correct that state agencies would have until 2030 to develop their position reduction plan, not a decade.
Todd Richmond, The Associated Press