How Long Can Congress Fake It to Make It Before Regulating AI Deepfakes
AI-generated media is becoming indistinguishable from reality. Prior to the 2024 presidential election, there was concern that AI deepfake videos and audio could spread misinformation and distort election results. Some argue that AI deepfake videos are always protected under the First Amendment, while others believe they should be regulated. Now that the election has concluded, how should Congress act on these concerns in preparation for the 2028 election, considering that AI technology will only continue to develop?
People have started using AI technology to create satirical deepfake videos and post them online. Have AI deepfakes been adopted so quickly that society is experiencing their harms before they are properly regulated? California Governor Gavin Newsom recently tried to tackle this issue by signing a bill – AB 2839 – on September 17, 2024. The bill makes “‘materially deceptive audio or visual media of a candidate’ illegal 120 days before an election and 60 days after an election.” The bill came to fruition in response to insensitive and potentially harmful videos about political candidates that circulated online prior to the 2024 presidential election.
For example, Christopher Kohls, a social media influencer, posted an AI-generated video mocking presidential candidate Kamala Harris. Elon Musk, the owner of “X,” reposted the video to his 198.2 million followers in late July, and it accumulated well over 100 million views. In the controversial video, an AI-generated imitation of Harris’s voice called her a “diversity hire” and declared that anyone who disagreed was “sexist and racist.” The post offended and misinformed millions of citizens across the nation. Governor Newsom responded immediately, stating that his new bill would make manipulated political ads like Kohls’s illegal. Although Governor Newsom signed the bill in good faith to deter the spread of misinformation, many believe it violated basic First Amendment rights.
U.S. District Judge John A. Mendez agreed with Governor Newsom that AI deepfakes pose significant risks, but found that the bill likely violated the First Amendment. He wrote that the bill “hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is vital to American democratic debate.” The judge stated that even “if AB 2839 were only targeted at knowing falsehoods that cause tangible harm, [those] falsehoods as well as other false statements are precisely the types of speech protected by the First Amendment.” He furthered his point by noting that the Supreme Court in New York Times v. Sullivan determined that even intentional lies against the government were protected under the Constitution.
Judge Mendez went on to say that the law was not narrowly tailored and was not the “least restrictive alternative” under the strict scrutiny applied to content-based laws, because its broad scope swept in any deceptive media likely to damage a candidate’s reputation. First Amendment specialists agreed with Mendez, insisting that a body of defamation law already exists to determine whether a statement is actionable. In addition, AB 2839 required a disclaimer, displayed for the duration of the video in a font no smaller than the largest font on screen; this requirement would likely make posts unwatchable and give legislators too much power over protected speech. Despite their differences, Mendez found common ground with Newsom on audio-only media, agreeing that such media should include spoken disclaimers at the start, the end, and every two-minute interval of the recording.
Although Mendez blocked Newsom’s bill, there seems to be a consensus that not all AI deepfakes should be permissible. In recent years, federal and state governments have moved to regulate AI deepfakes. States such as Texas, Florida, Louisiana, and Oregon have taken action to criminalize and/or prevent the circulation of AI deepfakes. Federal measures such as the Deepfake Report Act of 2019, the Deepfake Accountability Act, the Defiance Act of 2024, and the Protecting Consumers from Deceptive AI Act have likewise been introduced to deter similar deepfake content. Times are changing, and it is evident that state and federal law is trying to keep up.
Now that the 2024 presidential election has concluded, Congress is challenged to address AI deepfake issues that may have arisen in the 2024 election and to prepare for what may come in 2028. Although AB 2839 was not upheld in whole, the bill addressed urgent gaps in current AI deepfake legislation, and certain elements of it could influence future legislation. For example, disclaimers embedded within AI-generated audio present a solution that current state and federal law has not addressed. Additionally, because some AI deepfake issues overlap with existing defamation tort law, legislators might create a new body of law that addresses AI deepfakes or supplement current defamation law with new provisions.
AI deepfake videos can fall outside constitutional protection, but so can the laws that regulate them. State and federal governments must simultaneously protect each citizen’s First Amendment rights while deterring misinformation and safeguarding the integrity of elections. Using AB 2839 as a baseline, how will legislation evolve over the next four years to prepare for stronger, more realistic AI deepfakes that threaten the integrity of elections? Regardless of how these issues are resolved, it will be interesting to see how legislators creatively tackle the nuances of deepfakes in the coming years.