🤥 Faked Up #12
Google is running ads for undresser services, the world is seeking fact checks about Imane Khelif, and AI blends academic results with "some people's" beliefs about sleep training.
This newsletter is a ~7 minute read and includes 44 links.
TOP STORIES
STILL MONETIZING
Google announced on July 31 new measures against non-consensual intimate imagery (NCII). The most significant change is a new Search ranking penalty for sites that "have received a high volume of removals for fake explicit imagery." This should reduce the reach of sites like MrDeepFakes, which got 60% of its traffic from organic search in June1 and has a symbiotic relationship with AI tools like DeepSwap that fuel the production of deepnudes.
This move is very welcome and I know the teams involved will have worked hard to get it over the finish line.2
There is much more the company can do, however. The sites that distribute NCII are only half of the problem. The "undresser" and "faceswap" websites that generate non-consensual nudes are mostly unaffected by this update, because they don't typically host any of the non-consensual deepnudes that their users create.
And these sites are also getting millions of visits from Search:3
It is these undressers that are at the center of Sabrina Javellana's heart-wrenching story. In 2021, Javellana was one of the youngest elected officials in Florida's history when she discovered a trove of deepnudes of her on 4chan. I urge you to read the New York Times article on how this abuse drove her away from public life.
Google is not yet ready to go after undressers in Search (a mistake, imo). But it has promised to ban all ads for deepfake porn generators.
Unfortunately, it is breaking that promise. Over the past week, I was able to trigger 15 unique ads while searching for nine different queries4 related to AI undressing. Below are three examples (ads are labeled "sponsored"):
The ads point to ten different apps or websites. As is typical for this ecosystem of AI-powered grifting, these tools offer a variety of text- and image-generation services, and it's hard to tell whether NCII is always their dominant use case.
This is the case with justdone[.]ai, which appears to have run hundreds of Google Ads. Most of these were for legitimate use cases, but they included a couple advertising an "Undresser ai" (as well as two ads for AI-written obituaries, which have previously gotten Search into trouble).
And deepnudes are clearly a core service for several of the sites I found advertising with Google, including ptool[.]ai, which promises that "our online AI outfit generator lets you remove clothes" and mydreams[.]studio, which has run at least 15 Google ads.
MyDreams suggests female celebrity names like Margot Robbie as "popular tags" and gives users the option to generate images through the porn-only model URPM.
Google is not getting rich off of these websites, nor is it willfully turning a blind eye. These ads ran because of classifier and/or human mistakes, and I suspect most will be removed soon after this newsletter is published. But undressers are a known phenomenon, and their continued capacity to advertise on Google suggests that the teams fighting this abuse vector should be getting more resources (maybe they could be reassigned from teams working on nonexistent user needs).
And to be clear, Google is not alone. One of the websites above was running ads on its Telegram channel for FaceHub:AI, an app that is available on Apple's App Store and promises you can "change the face of any sex videos to your friend, friend's mother, step-sister or your teacher." Last week, Context found four other such apps on the App Store advertising on Meta platforms.
BEHIND THE SLOP
Jason Koebler at 404 Media has been on the Facebook AI slop beat longer than most (🫡, Jason). On Tuesday, he dropped a must-read deep dive on the influencers teaching others how to make money flooding Facebook pages with crappy images generated on Bing Image Creator with prompts like "A African boy create a car with recycle bottle forest." One slopmaster allegedly made $431 "for a single image of an AI-generated train made of leaves."
I think this article is to AI slop what Craig Silverman's piece on Macedonian teens was to fake news in 2016, because it reveals that behind the wasteland of Facebook's Feed is a motley crew of enterprising go-getters making algorithm-pleasing low-quality content in an attempt to make an easy buck.
IMANE'S MEANING
The controversy over Olympian Imane Khelif has been widely covered, so I'll only share three quick things. First, this AP News article about the International Boxing Association, the discredited association that seeded the doubts about her sex and chaotically failed to produce any evidence. Second, the "unacceptable editorial lapse" that led The Boston Globe to misgender Khelif. And finally, a glimmer of hope: "fact" and "fact-checking" were among the top rising trends in Google Search queries about the Algerian athlete over the past week, suggesting many folks around the world are just trying to figure out what's going on:
HOW FACT-CHECKERS USE AI
Tanu Mitra and Robert Wolfe at the University of Washington interviewed 24 fact-checkers for a preprint on the use of generative AI in the fact-checking process. I couldn't help but think that the use cases described (mostly classifying or synthesizing large volumes of information) are incremental rather than transformative.