As artificial intelligence has become widely used, governments are looking to seize the technology's potential opportunities while minimizing its dangers.
This year, federal lawmakers are considering a handful of bills that address artificial intelligence. They range from requiring transparency and disclosure for AI-generated content to establishing protections against deepfakes that imitate real people to studying the environmental impact of artificial intelligence.
President Joe Biden has also issued an executive order on AI, but so far, none of the congressional legislation has reached his desk to become law.
Over the past year, the action on AI lawmaking has been at the state level. At least 13 states have already enacted laws relating to artificial intelligence, and nearly every state legislature is considering a bill that would have the state study, use or regulate AI.
Generative AI’s explosion in 2023 spurred growing interest among lawmakers across the country, said Heather Morton, director of financial services, technology and communications at the National Conference of State Legislatures (NCSL), a group that provides resources and professional development for legislators and their staff.
“Policymakers and people in technology sat up and said ‘you know, we need to pay attention to this,’” Morton told Government Market News.
NCSL does not write legislation or advocate for specific policies, but since 2020 the group has hosted a Task Force on Artificial Intelligence, Cybersecurity and Privacy that includes Democratic and Republican legislators from statehouses across the country.
Most of the initial legislation passed by states orders studies of the potential uses and effects of AI and tasks the state with taking inventory of any AI technology already in use by state agencies or local governments.
The shift in 2024 is toward reining in the most harmful possibilities of artificial intelligence. Whether in Connecticut, Kentucky or California, state legislatures from coast to coast are considering bills that would require disclosure when video is AI-generated, criminalize nonconsensual deepfakes and stop AI-generated misinformation from influencing voters.
Most Common Types of AI Laws
State Study: For many states, studying AI is a starting point toward more consequential policies. This type of bill creates a council within the government to assess AI technology. The studies tend to focus on government uses for artificial intelligence, identifying where it will be most useful and what risks it poses. Although AI has the potential to make government work more efficiently, there has been pushback over data privacy and over giving non-human technology too much decision-making power.
In many cases, state agencies are already using artificial intelligence. In Texas, HB 2060, passed in 2023, created an AI Advisory Council within the governor’s office and ordered the state to take inventory of AI technology already developed or deployed by any arm of state government. Other bills would order a group to study how AI could be used in one particular area of government, such as education, healthcare or transportation.
Disclosure and Transparency: Generative AI can produce highly realistic video clips from user prompts. A handful of states are looking to require AI-generated content to carry a label disclosing that it was not made by a human. Lawmakers are also considering data-transparency requirements for companies that build AI applications: developers would have to disclose the training data and algorithms behind technology that generates content or makes decisions.
AI systems that make high-stakes decisions, such as those involving hiring, insurance and bank loans, could face further scrutiny. Because these systems train on real-world data, some experts fear they could reproduce existing social inequalities.
Nonconsensual Deepfakes: Because generative video and images have become so realistic, lawmakers are trying to prevent the proliferation of sexually explicit images created with AI. Some bills effectively criminalize AI-generated child pornography, while others take a broader approach, imposing penalties for using AI to imitate the likeness of any person without permission.
Some states go beyond sexually explicit content and bar anyone from profiting off an unlicensed AI reproduction of a real person’s image or voice. This type of legislation has implications for the entertainment industry, where AI-generated music has gone viral and actors’ unions have raised alarms about the potential for AI to replace their work.
Electioneering and Disinformation: Artificial intelligence could be used to influence elections. Deepfakes of candidates and public officials have the potential to sway voters with disinformation. Some states are looking to stop people from creating or sharing deepfake content of public officials.
Voters could be misled into believing a candidate said something they never said. Even for public officials who are not running for office, some state lawmakers fear deepfake content that looks like an official government source could sow chaos.
As lawmakers work to protect against what they view as the dangers of AI, they are trying to strike a balance that allows the technology to keep developing without hindering individual rights to free expression. That is a line Kentucky State Sen. Whitney Westerfield believes he has found with SB 317, which would outlaw profiting from deepfake content that duplicates the likeness of any real person without their approval.
“Nothing in my bill prohibits the commercial use of AI-generated content with the consent or licensing by the owner of that name, that likeness, and that image,” Westerfield told Government Market News. The bill was brought to him by the Recording Industry Association of America, a group representing major record labels and recording artists.
The Kentucky bill also does not prohibit making deepfake content for personal or parody use. The law kicks in when someone begins profiting from unlicensed AI reproductions. SB 317 has passed the Kentucky Senate and is awaiting a vote in the House. Westerfield also supports separate bills to criminalize AI-generated child pornography and to stop deepfakes of public officials and candidates from spreading disinformation.
“All these bills are aimed at creating a safe place for AI to still be used,” Westerfield said. “It’s important that we have some rules in place.”
In some cases, lawmakers are combining different AI-related objectives in one bill. In Florida, for example, SB 1680 both creates an advisory council to monitor new technology and makes it a crime to create or seek out AI-generated child pornography. The bill passed both the Florida House and Senate unanimously.
In Connecticut, lawmakers are considering a major piece of AI legislation, SB 2. In its current form, the bill would regulate the use of AI in high-stakes decision-making, require transparency about how generative AI is trained and crack down on sexually explicit content and election disinformation. At the same time, the bill would invest in AI education and pilot programs and create incentives for uses of AI that contribute positively to public health.
The bill is out of committee and awaiting a floor vote. Connecticut State Sen. James Maroney, a sponsor of the bill, said states need to be proactive rather than reactive to promote responsible use of AI.
“This is not the first disruptive technology we’ve had,” Maroney told Government Market News. “What we’re trying to do is to get ahead of it. In many ways with technology, it seems often it gets too far out of the barn before we try to regulate it.”