🤥 Faked Up #21
California's law mandating disclosure of election deepfakes was struck down, Russian hackers targeted users of AI nudifiers, and Meta AI is probably not good enough to be dangerous
This newsletter is a ~7 minute read and includes 37 links.
HEADLINES
Character AI deleted a custom chatbot impersonating a murder victim. The UN estimates cyber scams cost victims in East and South East Asia as much as $37 billion. Meta has removed more than 8,000 AI-generated celebrity investment scams since April. Instagram ran an ad falsely claiming a candidate for mayor in São Paulo did cocaine. The platform's misinfo-matching algorithm is flagging false positives. TikTok is running ads for supplements that double as North Korean propaganda. The blue checkmark may be coming to Google.
TOP STORIES
ONEROUS LABELING REQUIREMENTS
California’s election deepfakes law lasted all of two weeks. On Oct. 2, US District Judge John A. Mendez issued a preliminary injunction blocking the law on First Amendment grounds.
The law had been challenged by Christopher Kohls (“Mr Reagan”), the conservative creator behind the viral Kamala Harris parody ad.
From the decision:
AB 2839 does not pass constitutional scrutiny because the law does not use the least restrictive means available for advancing the State's interest here. As Plaintiffs persuasively argue, counter speech is a less restrictive alternative to prohibiting videos such as those posted by Plaintiff, no matter how offensive or inappropriate someone may find them. "Especially as to political speech, counter speech is the tried and true buffer and elixir," not speech restriction.
California's law would have prohibited the dissemination of "materially deceptive content" depicting a candidate, elected official, elections officer or voting instrument if that content could harm the candidate's reputation or falsely undermine confidence in the election. This is how the law defined "materially deceptive content":
audio or visual media that is intentionally digitally created or modified, which includes, but is not limited to, deepfakes, such that the content would falsely appear to a reasonable person to be an authentic record of the content depicted in the media.
Protecting a candidate’s reputation certainly seems like a step too far. A visual recreation of well-founded allegations of corruption against (say!) a New York City mayor could harm their reputation for all the right reasons.
Still, it seems to me that the intent of the law was not to restrict speech as much as to mandate counter speech in the form of transparency labels.
The labels had to state that the content "has been manipulated," be easily readable, and use a font no smaller than the largest font size used in the image or video. They also had to appear for the entire duration of a manipulated video clip, or at regular intervals in an audio clip.
Kohls complained that these "onerous labeling requirements" were impractical and that the law put him "at immense financial risk." The font size requirements probably did need to be reworked: as the plaintiff notes, a disclosure on his Harris parody video would have rendered the video unwatchable.
This seems like an easy fix. But the judge didn’t spend too much time on labeling. In fact, he said that part was probably OK:
The safe harbor carveouts of the statute attempt to implement labelling requirements, which if narrowly tailored enough, could pass constitutional muster.
Free speech experts agree. Here’s Alex Abdo, litigation director at the Knight First Amendment Institute at Columbia University:
The court was right to hold that the law’s core prohibition is unconstitutional. While the state has an interest in protecting the integrity of our elections, the First Amendment does not permit the government to outlaw speech about an election except in very narrow circumstances. The better way to address the risks that motivated the law would be to pass a labeling requirement, narrowly targeted at the uses of AI most likely to cause confusion or harm. A carefully drafted law along those lines would likely be constitutional.
It's unclear whether such a carefully drafted law is coming in California, especially in time for the election. Governor Gavin Newsom's spokesperson made it sound like the state will appeal, but no further details have emerged in the days since.
Newsom may well be partly to blame for this debacle. It was silly of him to single out a parody ad and give Kohls grounds to cast himself as a free speech martyr (from the complaint: "Plaintiff responded to Governor Newsom's threat of legal and regulatory action as anyone should against a bully—he punched back, posting another AI-generated, Harris-mocking video").
What's more, as I noted in FU#11, Kohls had labeled the Harris video as parody and, on YouTube, had even disclosed that he used AI. Making this video the centerpiece of a push against deceptive deepfakes invited the lawsuit that ultimately doomed the bill.
What remains to be seen is whether this ruling will inspire challenges to laws mandating deepfake disclosure in other states. Laws in Oregon and Washington appear, to a non-lawyer like me, more narrowly scoped and possibly safe, but California's bill reads a lot like the one enacted in Indiana that recently made the news.
AI-GENERATED = WORSE
Speaking of labels, researchers are probing their effect on users' perceptions of AI-generated content and of the people who disseminate it.
In a study published in PNAS Nexus, two political scientists at the University of Zurich concluded that "labeling headlines as AI-generated reduced the perceived accuracy of the headlines and participants' intention to share them, regardless of the headlines' veracity (true vs. false) or origin (human- vs. AI-generated)." Still, the effect was relatively small: a 2.66 percentage point decrease for a headline labeled "generated by AI," compared to a 9.33 percentage point decrease for content labeled "false."
In a separate report, NYU's Center on Technology Policy tested the effect of AI labels on political ads. It concludes that "in nearly all conditions, when subjects saw a label on an ad, they rated the candidate who made the ad as less appealing and less trustworthy."
OH NO, BAD PEOPLE GOT HACKED :(
According to cybersecurity firm Silent Push, Russian hackers have set up a network of AI nudifier sites to infect targets with malware. The company told 404 Media that it believes the notorious threat group Fin7 may be behind the websites, which look just like any other AI undressing service. When you try to upload the image you want to undress, however, the sites serve a link to "download a trial" that is actually an infostealer.
META’S BLENDED UNREALITY
I've previously written about the Reimagine feature on the Pixel 9, which lets you dramatically and realistically alter images within seconds and with few guardrails.
But Google isn't alone: Meta AI's image manipulation product works in a similar way. Accessible to anyone with an Instagram or WhatsApp account, the chatbot balked at obviously sensitive keywords but, with minimal rephrasing, provided me with an image that could be used to spread false narratives of voter fraud at a NYC polling location.