California bans political deepfake videos ahead of 2020 elections
Following a similar law instituted in Texas last month, California has now officially banned the distribution of maliciously deceptive audio and video content that misrepresents political candidates ahead of general elections. Questions remain over how effective or necessary these laws are, as civil liberties advocates argue that existing regulations should already prohibit the distribution of fake, defamatory or misleading political information.
Deepfake technology generally refers to a relatively modern AI image-processing technique in which one face is seamlessly superimposed onto another in a photo or video. The term originally appeared in late 2017, when the faces of celebrities were superimposed onto the bodies of actors in pornographic videos. Since then the term has become synonymous with videos manipulated in some way to make it seem as if someone is saying or doing something they never said or did.
After a doctored video of House Speaker Nancy Pelosi went viral in May, giving the impression she was slurring her words, the US Congress held a hearing to discuss how to manage this modern concern. The infamous Pelosi video was not a real deepfake, but it did highlight the broader issue surrounding modern video-doctoring technologies and the potential influence these altered videos could have on elections.
In early September Texas became the first US state to prohibit this kind of technology in relation to general elections. The Texas law made it a misdemeanor to publish or distribute a video of a political candidate within 30 days of an election, “with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.”
Now, California has followed suit with a similar, albeit somewhat weaker, law. It extends the window of prohibition before an election from 30 to 60 days, but only allows the subject of a deceptive video to seek injunctive or equitable relief. So while Texas made the offense punishable by up to a year in jail, California simply allows candidates to seek damages against those distributing malicious videos.
California assemblyman Marc Berman authored this particular bill after seeing the spread of the Nancy Pelosi video. He argues that deepfake videos must not be weaponized as part of misinformation campaigns to affect election results.
“In the context of elections, the ability to attribute speech or conduct to a candidate that is false – that never happened – makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters,” Berman says.
There are concerns these new deepfake laws could raise more problems than they solve. Back in February, the Electronic Frontier Foundation (EFF) suggested that any new laws specifically targeting this form of video manipulation could threaten beneficial uses of the technology and raise unnecessary constitutional problems.
David Greene, EFF’s Civil Liberties Director, claimed that a number of existing laws already protect against harmful photo manipulation and the dissemination of false information, so rather than introducing new laws, the legal system should simply enforce the regulations already on the books.
“Yes, deepfakes can present a social problem about consent and trust in video, but EFF sees no reason why the already available legal remedies will not cover injuries caused by deepfakes,” Greene wrote in February.
Kevin Baker, from the American Civil Liberties Union of California, delivered a response to the newly established California law, arguing the regulation is unnecessary and will most likely result in general voter confusion.
“Despite the author’s good intentions,” Baker wrote in response to the new California deepfake law, “this bill will not solve the problem of deceptive political videos; it will only result in voter confusion, malicious litigation, and repression of free speech.”