The measures are designed to prevent electoral manipulation and safeguard the rights of actors.
California continues to lead the charge in AI regulation, with Governor Gavin Newsom recently signing five new laws governing the use of artificial intelligence in politics and entertainment. The laws restrict high-risk applications of AI, such as election deepfakes and the unauthorized digital cloning of performers, addressing growing concerns about AI's impact on electoral integrity and the rights of performers.
With California being home to some of the world’s largest AI companies, these new regulations place the state at the forefront of AI governance in the United States. Below, we explore the implications of these laws and how they align with broader AI compliance initiatives.
New AI Laws Targeting Electoral Deepfakes
Three of the newly enacted laws specifically target the use of deepfakes during election campaigns, where AI-generated content could be used to manipulate voters. The first, AB 2655, requires large online platforms such as Facebook and X (formerly Twitter) to label or remove election-related deepfakes, making the platforms responsible for policing misleading content and taking a crucial step toward electoral transparency. A companion measure, AB 2839, prohibits the knowing distribution of materially deceptive, AI-generated election content in the period surrounding an election.
Preventing Misinformation with AI-Generated Political Ads
The third election measure, AB 2355, requires political advertisements to disclose when AI-generated content has been used. This is part of a broader effort to counter AI manipulation in the political sphere, where voters could be swayed by false or doctored AI-generated images, video, or audio. At the national level, the Federal Communications Commission (FCC) has moved to ban AI-generated voices in robocalls, further underscoring the need for robust AI compliance.
Together, these laws position California as a leader in the fight against AI-driven election manipulation, building a framework for broader AI governance across the country.
Protecting Actors from Unauthorized Cloning
The other two laws signed by Governor Newsom, AB 2602 and AB 1836, address the entertainment industry's concerns over the unauthorized digital replication of actors' voices and likenesses. AB 2602 requires studios to obtain a performer's explicit, informed consent before creating or using a digital replica of them, while AB 1836 extends similar protection to deceased performers by requiring the consent of their estates. Together, the two laws mark a crucial step toward protecting actors' rights in an era when AI can produce hyper-realistic digital replicas.
SAG-AFTRA and the Push for AI Regulations
These laws come in response to demands from the actors' union SAG-AFTRA, which has been advocating for stricter AI rules to prevent the exploitation of performers through AI-generated clones. The protections cover living actors and the estates of deceased performers alike, ensuring that a digital likeness cannot be used without the consent of the performer or their heirs.
By implementing these protections, California is setting a precedent for other states and countries that are considering similar legislation to regulate AI systems in entertainment. These measures reflect a growing awareness of the ethical and legal implications of AI governance in creative industries.
The Future of AI Governance in California
California’s new laws represent a major advancement in AI regulation, but this is only the beginning. The state is currently considering additional proposals that address the use of high-risk AI systems in various sectors, including health care, education, and finance. As these technologies continue to evolve, so too will the legal frameworks that govern their use.
Looking ahead, California may introduce more stringent AI audit requirements to verify that companies comply with the latest regulations, especially those related to AI ethics and transparency. This will likely mean closer scrutiny of AI systems and more frequent evaluations to confirm that they do not pose risks to individuals' privacy or safety.
The five new laws signed by Governor Newsom represent a significant step forward in the regulation of artificial intelligence in California. By targeting high-risk applications of AI such as election deepfakes and unauthorized digital replicas of performers, the state is taking proactive measures to protect both voters and performers from the potentially harmful impacts of AI technology. As California continues to shape the future of AI governance, it sets an important example for other states and regions to follow.