🤥 Faked Up #24
What I'll be watching for on America's election night, two more studies find that corrections work, and a Lebanese newspaper ran an AI fake on its front page
This newsletter is a ~6 minute read and includes 47 links. This is your last chance to take the Faked Up 2024 survey and tell me how I’m doing!!
HEADLINES
Google is adding a modicum of transparency to photos edited with Pixel’s AI tools. The company also described how its SynthID-Text watermarks work. A woman was arrested in China for lying about snowflakes online. A network of 1,800 X bots is promoting the upcoming UN climate change conference in Azerbaijan. U.S. Senator Mark Warner called on domain registrars to do more against disinformation. LinkedIn claims to have verified the identity of 55 million of its 1 billion users.
TOP STORIES
WATCH OUT PARTY
This is the last issue of Faked Up before the main event in America’s quadrennial election misinformation festival. So much has been written about efforts to discredit the validity of the democratic process in this country through falsehoods and xenophobic innuendos that it’s hard to know where to begin.
Here are a handful of articles that set the stage for November 5th:
NBC News and The New York Times looked at the network of organizations tied to Cleta Mitchell, the lawyer behind the falsehood-ridden pressure campaign to overturn the result of the 2020 presidential election in Georgia. They found that this year’s voter fraud narratives center heavily on noncitizens.
Elon Musk got the memo. Bloomberg’s analysis of the billionaire’s tweets found 1,300 posts like this one about “importing voters” that collected 10 billion views.
Musk’s PAC has also been paying for a fake “Progress 2028” campaign that pretends to be aligned with the Democratic Party. According to 404 Media, the operation has spent more than $500,000 on Meta ads touting made-up Kamala Harris policy stances likely to be unpopular with conservative voters.
And finally, Wired reported on a memo by the Department of Homeland Security warning law enforcement officials about domestic violent extremists “reacting to the 2024 election season […] by engaging in illegal preparatory or violent activity that they link to the narrative of an impending civil war, raising the risk of violence against government targets and ideological opponents.”
If the polls are right, election night will wrap up without a clear winner. Even so, many of the conspiracy theories that will dominate the following months will emerge during and immediately after November 5th. Here’s what I’ll be doing to keep track:
Monitoring my conspiracy theory burner accounts on Instagram and TikTok, which I’ve been priming by searching for the election rumors tracked by UW’s Center for an Informed Public.
Tracking the capacity of content on Truth Social (where Donald Trump posts most frequently) to hop over to more mainstream social networks.
Looking at rumors on the “election integrity community” set up by Musk’s Super PAC on X. I’m especially interested in the interplay between claims on this group and Community Notes, the crowd-checking tool that the platform has just “re-architected” to enable quicker turnaround times.
Searching for rumors about the 13 bellwether counties identified by Cook Political Report given that closer contests will inevitably spark closer scrutiny (and conspiracy theorizing).
Looking for relevant context on all of the rumors above via Google’s Fact Check Explorer, the National Association of Secretaries of State, and lists of disinfo beat reporters, election lawyers and election analysts.
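For that last step, Google also exposes the same fact-check index programmatically through its Fact Check Tools API, which makes it easy to scan for debunks of a rumor theme in bulk. A minimal sketch, assuming you have an API key with the Fact Check Tools API enabled (the query string and the printed response fields are illustrative):

```python
import requests

API_KEY = "YOUR_API_KEY"  # a Google Cloud API key with Fact Check Tools enabled
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_claims(query: str, page_size: int = 10) -> list[dict]:
    """Return published fact checks whose claims match a rumor keyword."""
    resp = requests.get(
        ENDPOINT,
        params={
            "query": query,
            "languageCode": "en",
            "pageSize": page_size,
            "key": API_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("claims", [])

# Example: look for published fact checks on one rumor theme from above.
for claim in search_claims("noncitizens voting"):
    review = (claim.get("claimReview") or [{}])[0]
    print(claim.get("text"), "->", review.get("textualRating"), review.get("url"))
```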
What’s your plan? Share in the comments or via email.
I’ll see you all on the other side, no doubt with oodles of contested claims of voter fraud to wade through.
DETECTION DISTRACTION
Lupa sought to verify a controversial clip purporting to capture Fortaleza mayoral candidate André Fernandes discussing a vote-buying scheme. Two of the three tools the Brazilian fact-checkers used claimed the audio was AI-generated; the third said it was real. As with last week’s case of “Matt Metro,” detectors proved insufficient to definitively resolve an authenticity quandary.
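One defensible way to handle that kind of disagreement is to treat detectors as signals to aggregate rather than oracles, and to abstain from a verdict when they conflict. A toy sketch of that idea (the detector names and scores below are hypothetical, not the tools Lupa used):

```python
def verdict(scores: dict[str, float], threshold: float = 0.8) -> str:
    """Each score is a detector's estimated probability that the clip is synthetic."""
    values = list(scores.values())
    if all(v >= threshold for v in values):
        return "likely AI-generated"
    if all(v <= 1 - threshold for v in values):
        return "likely authentic"
    return "inconclusive: detectors disagree"

# Roughly the Fortaleza situation: two detectors say synthetic, one says real.
print(verdict({"detector_a": 0.93, "detector_b": 0.88, "detector_c": 0.12}))
# -> inconclusive: detectors disagree
```

In the Fortaleza case, “inconclusive” is arguably the only honest verdict the tools supported.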
Separately, employees at buzzy cybersecurity startup Wiz were targeted with a deepfaked voice message of CEO Assaf Rappaport that sought to phish their credentials. The attack appears to have failed because the replica was trained on Rappaport’s public speaking, which reportedly sounds more anxious than his tone with staff. (For what it’s worth, I think we should abolish all audio notes.)
Overall, though, humans don’t appear all that well equipped to detect faked voices.
In a preprint, Sarah Barrington and Hany Farid of UC Berkeley used ElevenLabs to clone the voices of 220 speakers. They then asked survey respondents to judge whether two clips came from the same person and whether either was deepfaked. In almost 80% of cases, a real voice and its audio clone were deemed to be from the same speaker (real clips of the same voice were correctly attributed to the same person 92% of the time).
Slicing the data another way, Barrington and Farid found that respondents correctly flagged a clip as synthetic 66.3% of the time, which is not much better than flipping a coin. The silver lining is that accuracy appears to increase with clip duration, though the relative rarity of longer clips in the sample means we can’t tell for sure.
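To see why that caveat matters: with only a handful of long clips, the uncertainty band around any accuracy estimate gets wide. A quick sketch using the Wilson score interval, with hypothetical counts (the preprint’s actual per-duration sample sizes aren’t reproduced here):

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Hypothetical: ~75% accuracy on 200 short clips vs. on just 15 long clips.
print(wilson_interval(150, 200))  # fairly tight: roughly (0.69, 0.80)
print(wilson_interval(11, 15))    # very wide: roughly (0.48, 0.89)
```

With the smaller sample, the interval spans everything from coin-flip territory to near-expert performance, which is why the duration trend stays tentative.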
FACTS MATTER
Add two more studies to the pile suggesting that, at least in a lab setting, individuals are open to corrections.