🤥 Faked Up #17
Brazil's disinformation takedowns lack transparency, RT covertly paid right-wing influencers $400K/month, and Meta ran hundreds more ads for deepfake nudifiers
This newsletter is a ~8-minute read and includes 81 links. I ❤️ to hear from readers: please leave comments or reach out via email.
Last week's Faked Up analysis of AI nudifier ads on Meta was featured in The Washington Post! More on that further down.
THIS WEEK IN FAKES
The US Department of Justice seized 32 domains associated with the Doppelganger influence operation. Google's Reimagine is a ticking misinformation bomb (see also FU#15). An Australian MP made deepfakes of the country's prime minister to prove a point. Nikki Haley released a grudge. Police in Springfield, Ohio, said there was no evidence to back up a Facebook post claiming immigrants were eating local pets (to little avail. Very little avail). Musk may get summoned by British MPs about hateful misinformation. Plus: can you beat my 9/10 score at this deepfake-spotting quiz?
TOP STORIES
DISINFORMATION AND BRAZIL'S X-IT
Brazil's ban on X has been widely framed as a disinformation issue.
In the immediate sense, that's not quite right. As the think tank InternetLab put it in an emailed brief, the ban follows X's non-compliance with a legal order related to the "intimidation and exposure of law enforcement officers" connected to the Supreme Court's inquiry into the Jan. 8 attacks.
At the same time, the ban is the end result of five years of judicial actions targeting disinformation about the Supreme Court and the electoral process.
To understand how online disinformation removals work in Brazil, I consulted fact-checkers Cristina Tardáguila and Tai Nalon, tech law scholars Carlos Affonso de Souza and Francisco Brito Cruz, and law student Vinicius Aquini Goncalves.
The legal grounds
The 2014 Marco Civil da Internet, Brazil's Internet Bill of Rights, makes internet providers liable for harmful content on their platforms if they don't remove it following a court order. As Souza told me, the law is "not a guidance … It is binding, so judges need to apply that." At the same time, he thinks it is "in dire need of some updates, especially concerning issues of content moderation."
A legislative attempt to provide this update came in 2020, with the "fake news" bill (PL 2630). The draft bill would have defined several terms, including inauthentic accounts, fact-checking, and disinformation (content that is "verifiable, unequivocally false or misleading, out of context, manipulated or forged, with the potential to cause individual or collective damage").
For a variety of reasons, including real flaws in scope, the fake news bill never passed, leaving digital disinformation undefined and unregulated by Brazilian legislators.
The judicial branch filled this void. In 2019, the Supreme Court opened an inquiry into online false news about the institution and its members. According to legal scholars Emilio Peluso Neder Meyer and Thomas Bustamante, this relied on an "unusual interpretation" of the court's internal rules whereby because it can "investigate crimes committed inside the tribunal's facilities," it can investigate crimes on the internet.
Even as this inquiry pursued what several viewed as legitimate harms, it has also been described to me as "highly unusual" and "very heterodox."
Another key element of the online anti-disinformation puzzle is the October 2022 resolution by the Supreme Electoral Court (TSE). This gave the TSE president the unilateral power to request takedowns of disinformation identical to content already removed under earlier court orders, and the ability to fine platforms ~$20K for every hour the content stays online beyond the second hour after notification.
In February of this year, with local elections coming up in October, the TSE also banned the use of deepfakes in political campaigns.
The takedowns
The first thing to note is that takedown requests made by the Supreme Court and the TSE are confidential. "The press can't see anything; it's very opaque," Tardáguila says.
What little information we have on individual takedowns is what is being shared by recipients of the orders, like X. Brito Cruz says even that information is incomplete because it doesn't contain the full reasoning of the court.
In addition, decisions are often taken at the account level rather than at the individual URL level (Aos Fatos published an overview of some of the targeted X accounts here).
Back in April, Lupa reviewed social media content related to 37 TSE takedown requests released by X for a report by the US House Judiciary Committee. To date, I think it is the most comprehensive independent analysis of the merit of individual takedowns that is available. Together with what is being selectively disclosed in the Alexandre Files, this gives us a very partial picture of the content: debunked theories about voter fraud, misleading attacks against President Lula and high-voltage criticism of the Supreme Court. Clearly, not all of this is disinformation; but then again, not all of it was actioned on those grounds.
Other transparency reports by targeted platforms provide a sense of the scale of requests.
TikTok claims to have removed 222 links in response to 90 court orders in 2022, the year of the most recent presidential election. This pales in comparison with the 66,000 videos the platform says it deleted of its own volition for violating its electoral disinformation policies. Without data on the relative reach of these two sets, however, the figures are not directly comparable.
Google's transparency report is also helpful in that it clusters takedowns by reason. Electoral law was the grounds for 36% of removal requests in the six months to December 2022; that figure was just 3% in H2 2023.
Looking at the data by number of items removed, the overall share of electoral takedowns is reduced but nontrivial: Google reported 1,043 items removed in the second half of 2022. But given that some disinformation requests may fall under the defamation category, that, too, is an incomplete picture.
Aos Fatos, for one, is trying to compensate by tracking disinformation- and AI-related keywords in judicial decisions to monitor how the deepfake ban is being applied by regional electoral courts (across all platforms and the open web, not just X). Nalon says they are building an automated system with a view to better monitoring the 2026 presidential election.
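The basic mechanics of that kind of keyword tracking can be sketched in a few lines. This is a hypothetical illustration only: the keyword list, decision records, and function names below are my assumptions, not Aos Fatos' actual system.

```python
# Hypothetical sketch of keyword-based monitoring of judicial decisions.
# KEYWORDS and the sample records are illustrative assumptions.

KEYWORDS = {"deepfake", "desinformação", "inteligência artificial"}

def flag_decisions(decisions):
    """Return the decisions whose text mentions any tracked keyword."""
    flagged = []
    for d in decisions:
        text = d["text"].lower()
        hits = sorted(k for k in KEYWORDS if k in text)
        if hits:
            flagged.append({"id": d["id"], "keywords": hits})
    return flagged

sample = [
    {"id": "TRE-SP-001", "text": "Uso de deepfake em propaganda eleitoral."},
    {"id": "TRE-RJ-002", "text": "Pedido de registro de candidatura."},
]
print(flag_decisions(sample))  # only TRE-SP-001 is flagged
```

A production system would presumably add stemming, deduplication, and a scraper for the courts' decision databases, but the core is this kind of filter.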
But at least at this stage, the information shared by Brazil's courts and gleaned from platform transparency reports is scarce and scattered, making an honest assessment of the scale and fairness of Brazil's anti-disinformation decisions very difficult.
As Brito Cruz told me, "the lack of transparency is a concern for all of us."
RT WAS NOT IN IT FOR THE ROI
The US Department of Justice claims that Kostiantyn Kalashnikov and Elena Afanasyeva, two employees of Russian state-controlled media outlet RT, channeled nearly $10 million to "covertly finance and direct a Tennessee-based online content creation company." The company has been identified as Tenet Media, run by far-right Canadian influencer Lauren Chen and her husband.
In turn, the indictment alleges, Tenet paid hundreds of thousands of dollars to sign American right-wing influencers including Benny Johnson, Dave Rubin and Tim Pool (each claimed victimhood).
The indictment is a riveting read, and CBS, NBC and WaPo have done a great job covering the fallout. But if you take only three things out of it, let them be these:
The DOJ documents indicate that RT wasn't looking for a financial return on investment. Before signing them on, Chen warned Kalashnikov that Pool and Rubin "would not be profitable to employ," but the RT employee greenlit the proposal anyway (Pool has since claimed that $100,000 per video is "market value"). Still, the influencers bought the RT operation an audience: in less than a year, Tenet Media had collected 330K subscribers and 16M views on YouTube.1
The content was probably in line with what the influencers were posting already; Wired scraped Tenet's videos before YouTube took them down and analyzed frequent terms. But at least once, one of the influencers appears to have accepted an editorial recommendation to cover the Crocus City Hall terrorist attack "from the Ukraine-US angle."
Not a lot of due diligence appears to have gone into these deals. Two of the right-wing influencers were content with the CV below as evidence that a well-heeled funder who could afford their honoraria stood behind Tenet Media. No other online footprint was available for Eduard Grigoriann, and even Tenet's founders kept misspelling his surname in communications.
The indictment also says that Tenet Media was one of "multiple RT covert distribution channels in the United States." It remains to be seen whether the DOJ will reveal more active deceptive creators or whether the others have already been identified by social networks.
CRUSHMATE CRUSHING IT ON META
Last week, I wrote about several "AI undressers" running more than 200 ads on Meta's platforms.
The good news is that Meta removed most of the ads following my article and Google blocked one of the related apps (Apple does not return my emails).
The bad news is that this weekend one of the undressers was still able to run ads from a range of other Facebook and Instagram pages. On Sunday, I collected 377 ads from 16 pages tied to this network.2 All of them pointed to the same service, called Crushmate. I reached out to Meta for comment, and by Tuesday night they had taken down 15 of the 16 pages. A spokesperson told me that "we're continuing to remove the ads and take action against the accounts breaking our rules."
For as long as they lasted, these ads appeared to be paying off for Crushmate.
According to SimilarWeb, the main site in the network, crushmate[.]club, got 80% of its 130K visitors from social media in August.
The entirety of that social media traffic came from Facebook and Instagram: