Hello!
This newsletter is brought to you by flammable waste and misinformation amoebas. It is a ~7-minute read and contains 41 links.
Top Stories
[“They didn’t know they’d been listening to an AI”] Deepfakes of Indian politicians are often on the silly end of the spectrum. Still, I was struck by this bit on Wired:
[Last year,] iToConnect delivered 20 million Telugu-language AI calls for 15 politicians, including in the voice of the then state chief minister. For two weeks before polls opened, voters were targeted with personalized 30-second calls: some asking people to vote for a certain candidate, others sending personalized greetings on religious holidays, some just wishing them a happy birthday.
It worked. Voters started showing up at the party offices, expressing their delight about receiving a call from the candidate, and that they had been referred to by name. They didn’t know they’d been listening to an AI. Pasupulety’s team fielded calls from confused party workers who had no idea what was happening.
iToConnect has every incentive to overstate the impact of its calls to drum up business. A similar effort by another PR firm was laughably bad. Still, well over 50 million AI clone calls were placed in this election cycle, a concrete manifestation of the 1-to-1 deceptive persuasion that many (including Sam Altman) thought would become AI’s silver bullet. Given the improvements in image generation quality, I’m wary of being too skeptical that interactive audio deepfakes will get pretty believable.
But even if we stick to text alone, this exercise by The New York Times shows you can fine-tune an open-source LLM on Reddit and Parler posts and create perfectly passable replicas of partisan social media posters.
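If that sounds like it requires a research lab, it doesn’t. Here’s a minimal sketch, in Python with Hugging Face’s transformers and datasets libraries, of the kind of fine-tune the Times exercise describes. The model choice, the posts.txt corpus, and the training settings are my own illustrative assumptions, not details of the Times’ actual setup.

```python
# A minimal sketch of fine-tuning a small open-weights causal LM on a file
# of scraped social media posts. Model, file name, and hyperparameters are
# illustrative assumptions, not the NYT's actual configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL = "gpt2"  # stand-in for any small open-weights model

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL)

# posts.txt: one scraped post per line (hypothetical corpus)
dataset = load_dataset("text", data_files={"train": "posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="partisan-clone", num_train_epochs=3),
    train_dataset=tokenized,
    # mlm=False makes this plain next-token prediction on the posts' style
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Nothing here needs more than a consumer GPU; the barrier to a passable partisan clone is collecting the posts, not compute.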
So chances are that AI-generated content will become (or already is) a big chunk of election discourse online.
And good luck spotting it. In a preprint by two cognitive scientists at UC San Diego, 500 participants spent 5 minutes texting with either a human or one of three AI models through an interface that concealed who was on the other side. 54% of the respondents assigned to GPT-4 thought they were chatting with a human, not much lower than the share of respondents who judged the actual human to be human (67%).
[Threading the facts] Instagram boss Adam Mosseri announced on May 15 that Meta’s third-party fact-checking program was now fully operational on Threads. Fact-checking partners can find and label misinformation that’s unique to the platform; previously, the program only carried over labels via fuzzy matches of content already fact-checked on Facebook or Instagram. I was able to trigger the label on a misleading post flagged by PolitiFact.
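Meta hasn’t published how its matching works, but as a toy illustration of what “fuzzy matching” means in this context: score a new post against claims that have already been rated, and carry the label over above some similarity threshold. Everything below (the function, the threshold, the example posts) is hypothetical.

```python
# Toy illustration of fuzzy matching a new post against an already
# fact-checked claim. Meta's real system is not public; difflib's
# similarity ratio and the 0.85 threshold are invented stand-ins.
from difflib import SequenceMatcher

def is_fuzzy_match(post: str, checked_claim: str, threshold: float = 0.85) -> bool:
    ratio = SequenceMatcher(None, post.lower(), checked_claim.lower()).ratio()
    return ratio >= threshold

# A lightly reworded copy of a debunked claim still clears the bar
print(is_fuzzy_match(
    "BREAKING: 5G towers are making birds drop dead",
    "Breaking: 5G towers are making the birds drop dead!",
))  # True
```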
[A picture is worth 1,000 lies] Several good humans I used to work with just released this preprint taxonomizing media-based misinformation. The primarily Google-based authors trained 83 raters to annotate 135,862 English-language fact checks carrying ClaimReview markup. (They are releasing their database under the suitably laborious backronym of AMMeBa.)
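If you haven’t run into it, ClaimReview is the schema.org markup fact-checkers embed in their articles so that crawlers can identify them, which is what made a 135,862-fact-check corpus findable in the first place. Here’s roughly what one record looks like, built as a Python dict with invented placeholder values:

```python
# A minimal ClaimReview record, per the schema.org vocabulary fact-checkers
# embed as JSON-LD in their articles. All values below are invented.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/checks/12345",  # hypothetical
    "claimReviewed": "A viral video shows X happening",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",
    },
    # itemReviewed can point at where the claim appeared, e.g. a video,
    # which is what lets a study sort checks into media-based categories
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {"@type": "CreativeWork", "url": "https://video.example/abc"},
    },
}

print(json.dumps(claim_review, indent=2))
```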
The study finds that almost 80% of fact-checked claims are now in some way related to a media item, typically video. This high proportion can’t be ascribed only to Facebook’s money drawing the fact-checking industry away from textual claims, given that the trend precedes the program’s launch in 2017.
Unsurprisingly, AI-generated disinformation has shot up since the advent of ChatGPT and its ilk.
[Cloak and casino] Aos Fatos found scammers preying on people trying to help victims of the flooding in Rio Grande do Sul. According to the Brazilian fact-checkers, bad actors used cloaked URLs to make at least two results on Google Search look like information from local government websites. The links actually redirected to an online casino. Aos Fatos had found earlier this month that at least 131 local government websites had been targeted this way.
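For those unfamiliar with the technique: cloaking means serving search engine crawlers one page and human visitors another, so a scam URL can earn a legitimate-looking search result. Below is a minimal sketch of one common variant (user-agent sniffing), written with Flask. It’s hypothetical; Aos Fatos didn’t publish the scammers’ actual code, which may well key off IP addresses or referrers instead.

```python
# Hypothetical sketch of user-agent cloaking: crawlers see benign content
# (so the page indexes as flood-relief information), humans get redirected.
from flask import Flask, redirect, request

app = Flask(__name__)

CRAWLER_TOKENS = ("googlebot", "bingbot")  # crude crawler detection

@app.route("/")
def cloaked_page():
    user_agent = request.headers.get("User-Agent", "").lower()
    if any(token in user_agent for token in CRAWLER_TOKENS):
        # Search engines index this and show a legitimate-looking result
        return "<h1>Flood relief information</h1>"
    # Everyone else is silently bounced to the scam destination
    return redirect("https://casino.example")
```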