🤥 Faked Up #20
Researchers say CAPTCHAs are doomed, Thousands of US schools are likely affected by deepfake nudes, and AI slop pivots to Hurricane Helene
This newsletter is a ~5 minute read and includes 39 links.
THIS WEEK IN FAKES
The FCC imposed a $6M fine on the consultant behind the Biden deepfake robocall. South Korea’s parliament passed a bill to criminalize watching deepfake porn. U.S. Senator Cory Booker blocked the Take It Down Act. Stanford invited an Epoch Times editor to moderate a panel on the origins of COVID. Shocker: X still has a bot problem. An Indiana senator may have broken state law with a deceptively altered ad. In Brazil’s local elections, most (disclosed) use cases of AI in political ads involved synthetic jingles. Kamala Harris did not kill a rhino.
TOP STORIES
GOT TO CAPTCHA ‘EM ALL
In a preprint, researchers at ETH Zurich claim that advanced YOLO models can solve 100% of reCAPTCHAv2 bot-filtering tests (I swear those were all real words). The paper shows that VPN use, mouse movement and user history all affect the likelihood of detection. The authors conclude that “we are now officially in the age beyond captchas.”
On the one hand, great! I won’t miss these capricious and deranged puzzles. But reCAPTCHAv2 is one of the internet’s main defenses against automated bots. So this is probably not the best thing to happen just as generative AI unloads hordes of imitation humans in our online spaces.
WORSE THAN IT SEEMS
I have been counting the number of students mentioned in media reports of deepfake nudes in school communities to try to quantify the problem. I had no doubt that my global tally of 530 was a wild underestimate, but now I know just how wild.
In a new report, the Center for Democracy & Technology surveyed American public school students in grades 6-12 (roughly ages 11 to 18). 15% of them said they know of a deepfake depicting someone associated with their school that was shared in the past school year.
With 15M students in U.S. public high schools, this suggests the number of deepfake nudes in school settings around the country may be as high as 225,000. Even if some cases were double-counted because multiple pupils from the same school took the survey, and even if half of the students answered falsely, we’d still be looking at tens of thousands of cases. And this would assume that every case affected only one victim (unlikely) and that there were no cases in private schools (impossible).
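The arithmetic behind that estimate can be sketched in a few lines. Note that the overlap factor (how many surveyed students know of the same incident) is my own illustrative assumption to bridge the 15% survey share and the 225,000 figure; it is not a number from the CDT report.

```python
# Back-of-envelope sketch of the estimate above. The overlap factor is a
# hypothetical assumption for illustration, not a figure from the report.

students = 15_000_000   # U.S. public high school students
share_aware = 0.15      # said they know of a deepfake shared at their school

reports = students * share_aware   # 2,250,000 student reports

overlap = 10            # assumed number of respondents aware of the same case
cases_upper = reports / overlap    # ~225,000 distinct cases (upper bound)

false_answer_discount = 0.5        # "even if half of the students answered falsely"
cases_lower = cases_upper * false_answer_discount   # still ~112,500

print(f"upper bound: {cases_upper:,.0f}, after discounting: {cases_lower:,.0f}")
```

Even with a generous overlap factor and the false-answer discount, the floor stays firmly in the tens of thousands.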
You should read the whole report, but two other things stood out to me. First, I was surprised to see most students report that nonconsensual intimate imagery (authentic and deepfaked) is shared primarily via social media, not messaging apps.
Second, ~60% of teachers and students report that their school has not communicated its procedures for addressing deepfake nudes.
THE META TO TELEGRAM PIPELINE
A few weeks ago, some researchers suggested that searching for links to Telegram might be an easy proxy for Meta to detect policy-violating ads.
That advice would have helped in the case of “aitooltool,” which has reached tens of thousands of Instagram users with at least 18 ads for an AI undressing bot on Telegram. One of its ads is just a tutorial for the tool:
Others are only slightly less direct, inviting users to the Telegram page for a “different and surprising photo of this girl.”