Faked Up #15
Google reimagines deceptive photo editing, Facebook's AI single parent cops drive a content farm and Telegram's channels for undresser apps reach hundreds of thousands
This newsletter is a ~7 minute read and includes 40 links. I ❤️ to hear from readers: please leave comments or reach out via email.
THIS WEEK IN FAKES
Pakistan dropped charges against the man who misidentified the Southport stabber. The company that distributed the Biden deepfake robocall agreed to a $1M fine. McAfee released an audio deepfake detector. Australia criminalized non-consensual deepfake porn. Zuck got played (context | analysis). A deadbeat dad playing dead got his dues. Megalopolis retracted a trailer full of fake reviews.
TOP STORIES
Corrections: I noticed after publishing last week's newsletter that "AI info" labels appear on some of the images in my sample when viewing Instagram from the iOS app rather than via the web. I apologize for the inaccuracy. More humorously, I left a "not" out of a sentence, flipping my position on kink-shaming. As always, my corrections policy is here.
GOOGLE REIMAGINES SCAMS
Google (and others) have been releasing AI editing tools for the past year. But the "Reimagine" feature released with the new Pixel 9 phone amps up the risk factor.
I haven't gotten my hands on the phone yet, but as demoed, the functionality works like this: you tap on a part of the photo, write a description of what you would like added, and the phone adds it in. In its communications material, Google suggests using Reimagine to add a sunset to a mountain scene or a volcano to a picture of a child.
The Verge did a great job showcasing how the tool can be used to distort reality in more unsavory ways. Others also messed around with it and shared their results: you can add a tank to a side street, reptiles to a meal, and a syringe next to an enemy. You can change the weather, or add a UFO.
It is not difficult to imagine how people might use this tool to manufacture evidence. I was reassured in talking to Cornell's Ben Sobel that the legal system is probably ready for this:
Witnesses have always been able to lie or mislead, and the sorts of penalties that deter lies in spoken testimony will be available to deter fabricated images. I think these technologies will reaffirm the importance of chain of custody, but that's always been important: the same principle that allows us to trust that a knife offered into evidence was actually the same knife found at the crime scene is the principle that will allow us to trust that surveillance footage being offered into evidence is a contemporaneous record that hasn't been tampered with.
But even if these images don't upend the legal system, they can be used for scams and misinformation.
Vandalize a house with hateful graffiti and stoke communal hatred. Add a stack of ballot boxes to a photo of a ditch and seed doubt in the election result. Show an explosion near a poll site and depress voter turnout. Deepfake an accident onto a parked car and extract money from its owner to cover a trip to the auto shop.
What is more, some of the standard tools used to verify images will be rendered useless by Reimagine. Fact-checkers often use weather records, street signs and landmarks that can be verified with satellite imagery or Google Earth to confirm that an image is from where it claims to be. These verification tools will be increasingly ineffective against the blended unreality that Google's Pixel phone offers up.
Additional guardrails are needed. The AI label in the metadata should be made visible and hard to remove; it should travel with the photo as it gets shared by email, messaging app and social network. Google should use geo-fencing to prevent the use of Reimagine around sensitive locations like police stations, places of worship, schools and polling places.
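To make the metadata idea concrete, here is a minimal sketch of what surfacing such a label could look like. The IPTC digital source type identifiers below are real published values that signal AI involvement, but whether a given Pixel photo actually carries them, and the file name and script wrapper, are assumptions for illustration only.

```python
import re
import sys

# Real IPTC "digital source type" identifiers that mark AI involvement.
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia":
        "created entirely with generative AI",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia":
        "edited or composited with generative AI",
}

def ai_label(path: str) -> str | None:
    """Return a human-readable AI label if the photo's embedded XMP declares one."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP metadata travels inside the image file as an XML packet; grab it if present.
    match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    if not match:
        return None
    xmp = match.group(0).decode("utf-8", errors="ignore")
    for uri, label in AI_SOURCE_TYPES.items():
        if uri in xmp:
            return label
    return None

if __name__ == "__main__":
    # Hypothetical usage: python check_ai_label.py edited_photo.jpg
    print(ai_label(sys.argv[1]) or "no AI provenance tag found")
```

A "visible and hard to remove" label would mean apps and social networks running a check like this by default and displaying the result, rather than leaving the tag buried where only forensic tools look.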
Some will dismiss this as alarmist, pointing to reactions to the launch of Photoshop. But Reimagine makes faking much, much simpler. The only compelling argument left for not worrying may be that few people actually buy Pixel phones. But Samsung is doing the same thing.
SINGLE PARENT COPS OF AI
American Patriots is a Facebook page that has amassed more than 200,000 likes since its launch in June. Several of its posts are just pictures of AI-generated single parents in uniform holding a child they kept despite being divorced, widowed, or otherwise abandoned. The photos get thousands of likes.
Despite negative reviews and several comments flagging pictures as AI-generated, many users engage with the content at face value. Take this photo of "Lara" and her mother. Patriotic commenters appear unfazed by the disastrous rendition of the American flag and the use of what appears to be Aurebesh on the little girl's name tag.