đ€„ Faked Up #28
BlueSky has a doppelgÀnger problem, a World War III narrative leans on a fake cover of The Economist, and Instagram is flooded with cozycore AI slop
Hey look, Ma! Faked Up's findings are cited in this letter by 26 members of the U.S. Congress urging platforms to do more about non-consensual deepfake nudes.
This newsletter is a ~6 minute read and includes 57 links.
HEADLINES
Australia's government gave up on its proposed bill against online misinformation. The UK Parliament launched a new inquiry into "social media, misinformation and harmful algorithms" and seems to believe Elon Musk will show up. An analysis of health-related sponsored content on Brazilian media outlets found that the vast majority of these ads were misleading. The New York Post's front page on Saturday equated misinformation research with censorship. UNESCO thinks creators should get fact-checking training. A high school student in Florida who shared deepfake nudes of at least 30 of her classmates was charged with a third-degree felony; her ex-boyfriend, who created the images, walked free.
TOP STORIES
BLUESKY'S DUPLICATION DILEMMA
In last week's newsletter, I speculated that impersonation may be one of BlueSky's most immediate challenges on the disinformation front.
I can now ground that take in a little bit more data. Over the past week, I manually searched for any accounts impersonating the top 500 BlueSky users by follower count.
Given that it's not always possible to ascertain that the bigger accounts themselves are legitimate (e.g. Ryan Reynolds and Arnold Schwarzenegger are in the top 500 with accounts that have barely posted), I will refer to the less-followed copies of popular accounts as "doppelgÀngers" rather than impersonators.
Of the 305 accounts representing a named individual, at least 74 have at least one doppelgÀnger (paid subscribers can access the data here). Because many have more than one double, I found 140 doppelgÀngers in total.
Among the top 100 most followed named individuals, fully 44% have at least one duplicate. Most are cheap knock-offs of the bigger account, down to the same bio and profile picture. Only 16% of the duplicates that I reviewed had an "impersonation" label.
These are on average small accounts: the median doppelgÀnger has 90 followers. But consider that at the time of writing, having more than 85 followers puts you in the 95th percentile of most followed users on BlueSky.
For now, this seems like cheap engagement bait. More popular accounts are more targeted, regardless of whether they are better known IRL. Most of the named individuals who got one of the 10 most reposted skeets on each of the days between Nov 21 and Nov 25 have at least one duplicate.
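If you want to replicate or extend this kind of scan, here is a rough sketch of how it could be automated against BlueSky's public AppView API. To be clear: my own search was manual, and the display-name matching below is just one illustrative heuristic, not the methodology behind the numbers above.

```python
# Illustrative sketch: flag possible doppelgÀngers of a BlueSky account by
# searching for profiles that reuse its display name. Uses Bluesky's public
# AppView XRPC endpoints; the matching heuristic and limits are assumptions.
import requests

APPVIEW = "https://public.api.bsky.app/xrpc"


def get_profile(actor: str) -> dict:
    """Fetch a detailed profile (includes followersCount) for a handle or DID."""
    r = requests.get(f"{APPVIEW}/app.bsky.actor.getProfile",
                     params={"actor": actor}, timeout=10)
    r.raise_for_status()
    return r.json()


def find_doppelgangers(handle: str, limit: int = 25) -> list[dict]:
    """Search for accounts sharing the target's display name but using a different handle."""
    target = get_profile(handle)
    name = (target.get("displayName") or handle).strip()

    r = requests.get(f"{APPVIEW}/app.bsky.actor.searchActors",
                     params={"q": name, "limit": limit}, timeout=10)
    r.raise_for_status()

    suspects = []
    for actor in r.json().get("actors", []):
        if actor["handle"] == target["handle"]:
            continue  # skip the genuine account itself
        if (actor.get("displayName") or "").strip().lower() == name.lower():
            profile = get_profile(actor["handle"])
            suspects.append({
                "handle": actor["handle"],
                "followers": profile.get("followersCount", 0),
                "labels": [label.get("val") for label in profile.get("labels", [])],
            })
    return suspects


if __name__ == "__main__":
    for suspect in find_doppelgangers("katmabu.bsky.social"):
        print(suspect)
```

A real scan would also want to compare bios and profile pictures, since the knock-offs I found copy those too.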
But sloppy verification is an early signal of broader deception and catnip for organized disinformation actors. Kamala Harris, who isn't even on BlueSky, at one point had 20 impersonator accounts.
Anecdotally, BlueSky appears to realize the importance of this issue. Over the past week, semi-viral posts by impostor accounts for Vermont Senator Bernie Sanders and Trump universe gajillionaire Bill Ackman were relatively swiftly deleted.
BlueSky COO Rose Wang has promoted the feature that allows users to claim a handle tied to a domain they own as a verification mechanism.
But the mechanism is relatively simple for impersonators to circumvent. Media analyst Kat Abughazaleh, who was in my sample of BlueSky's top 500 users, posts on BlueSky with the handle @katmabu.bsky.social. She confirmed to me that she does not own the handle @katmabu.com, which belongs to an impostor who went to the trouble of registering a custom domain in the hope of impersonating her.
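That is because the check only proves that whoever set the handle controls the domain in question: the domain advertises a DID (via a _atproto DNS TXT record or a /.well-known/atproto-did file) and the account's handle points back at the domain. Nothing ties the domain to a real-world identity, so registering a lookalike domain is enough to pass. A rough sketch of the lookup, assuming BlueSky's public AppView endpoint (this is illustrative, not how BlueSky's own checks are implemented):

```python
# Illustrative sketch of domain-handle resolution on BlueSky. It shows that the
# check is purely "does this domain point at this DID"; a lookalike domain like
# katmabu.com passes just as well as the real person's handle.
import requests

APPVIEW = "https://public.api.bsky.app/xrpc"


def resolve_handle(handle: str) -> str:
    """Return the DID that a handle (bsky.social or custom domain) resolves to."""
    r = requests.get(f"{APPVIEW}/com.atproto.identity.resolveHandle",
                     params={"handle": handle}, timeout=10)
    r.raise_for_status()
    return r.json()["did"]


def domain_advertised_did(domain: str) -> str | None:
    """Return the DID a domain advertises via the HTTPS well-known method, if any
    (the other variant of the check is a _atproto.<domain> DNS TXT record)."""
    try:
        r = requests.get(f"https://{domain}/.well-known/atproto-did", timeout=10)
        return r.text.strip() if r.ok else None
    except requests.RequestException:
        return None


# Both handles can resolve to perfectly valid DIDs; nothing in the lookup ties
# the lookalike domain to the person it imitates.
for handle in ("katmabu.bsky.social", "katmabu.com"):
    try:
        print(handle, "->", resolve_handle(handle))
    except requests.HTTPError:
        print(handle, "-> does not currently resolve to a DID")
```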
This is not a sustainable setup for a platform that hopes to overtake Threads and X.
BlueSky CEO Jay Graber discussed the issue in a live stream on Monday, suggesting the platform might eventually ship its own verification feature while also allowing other services to provide their own.
In the interim, perhaps we'll have to follow Sky News presenter Sophy Ridge's lead and verify via a selfie-with-a-unique-handwritten-note.
MANUFACTURING EVIDENCE
On the eve of legislative elections in the Indian state of Maharashtra, the X account for the governing Bharatiya Janata Party posted four recordings purporting to prove that leaders of its rival parties were using ill-gotten bitcoin to fund their campaign. Fact-checkers at BOOM claim that at least three of these audio clips were AI-generated.
Despite several tools and experts consulted by the collaborative Misinformation Combat Alliance's Deepfakes Analysis Unit agreeing that the clips were likely synthetic, the BJP's post remains live on X, with no visible Community Note.
WHAT'S UP WITH WORLD WAR III?
A fake cover of The Economist dramatically warning of an upcoming nuclear showdown between Russia and the United States was widely shared last week across several social networks, including by pro-Kremlin Telegram accounts like Golos Mordora (h/t NewsGuard).
Through a quick reverse image search, I found at least 38 different tweets sharing the fake cover. The tweets, which had more than 200,000 cumulative views, were all accompanied by commentary condemning the Biden administration's choice to provide Ukraine with long-range missiles that can be used to hit targets inside Russia.
This narrative is not limited to accounts sharing fake Economist covers: last Friday, podcaster Joe Rogan made the same argument. Two days later, Ukrainian former boxer Wladimir Klitschko, who is also the twin brother of Kyiv's mayor, called Rogan out for "repeating Russian propaganda."
MODERN DAY HEARSTS
Speaking of geopolitical hot spots and questionable sources
 AFP uncovered "a coordinated network of dozens of Facebook and YouTube channels that direct users to a bogus news website" that runs sensational and misleading articles claiming the Philippines and China are on the brink of war. (There are dozens of fact checks about the maritime dispute between the two countries.)
The network "appears to use artificial intelligence (AI) to rapidly churn out unfounded claims for advertising revenue" and has a combined following of more than 10 million people. AFP wasn't able to attribute the network to state actors, but did track down a Thailand-based manager of some of the pages, who said his team does "not have a deep understanding of these issues" but focuses "on the potential virality that the posts can attract" in order to earn money through ad placements.
"HOW DO I SEND YOU $7,000?"
Aspiring immigrants to Canada have been targeted by a deepfake video of Max Chaudhary, a Toronto-based immigration lawyer. Chaudhary found out because someone called his office to arrange the payment of $7,000 that they had discussed with his impersonator via WhatsApp.
COZYCORE AI SLOP
Big thanks to my wife for flagging another niche of Instagram overrun by AI slop. Accounts with millions of followers are mass-posting AI-generated reels of cozycore interior design. In what I'll admit is an eye-catching touch, they tend to animate the televisions in the image with short snippets from the classic cartoon Tom and Jerry.