🤥 Faked Up #18
Americans turn to Search for debate fact-checking, Meta buries its AI labels, and Australia tries to regulate misinformation (again).
This newsletter is a ~6 minute read and includes 60 links.
THIS WEEK IN FAKES
The US Secretary of State accused Rossiya Segodnya and TV-Novosti of being a "de facto arm of Russia's intelligence apparatus." Meta decided this was a good time to deplatform related accounts (most have been off YouTube for years). Congress is unlikely to pass a deepfake election law before November, but California did. Google struggles with AI paintings (see also: Hopper and Vermeer). OpenAI claims its new model "hallucinates less." A new app lets you build an entirely AI-generated following (at least it doesn't do too poorly with conspiracy theories).
TOP STORIES
OZ TRIES AGAIN
The Australian government introduced three bills addressing harmful online behavior, including the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 targeting digital deception on online platforms. This is the Labor government's second shot at it and an outgrowth of a voluntary industry code spearheaded by the former Conservative government. Elon Musk called Australian legislators fascists (🤷‍♂️), but even someone I respect like Mike Masnick expressed skepticism, while the advocacy group Reset.Tech called it "worryingly poor."
As I read it, the proposal struck me as relatively middle-of-the-road? The draft bill defines misinformation as false content that can create physical harm or serious harm to the electoral process, public health, protected groups and critical infrastructure.
It would boost the powers of ACMA so that the media regulator can require digital communications platforms to join or create a binding industry code on disinformation. Platforms would also be expected to keep records of disinformation countermeasures and publish risk assessments. Should they fail to comply, there's a list of remedial actions ranging from formal warnings to financial penalties of up to 5% of global turnover.
THEY'RE FACT-CHECKING THE PETS
Last week, Google Search queries related to the term "fact-checking" were at a 10-year high both globally and in the US. In line with historical patterns, this peak was driven by an American political debate.
Looking at the related topics adds some color to what people were eager to verify through Search. Besides the names of the two ABC moderators, Americans wanted to verify the baseless claim that Haitian immigrants in Ohio were eating cats, Harris's claim that the Wharton School said Trump's budget plan would "explode the deficit" (here's what they said), and former Virginia Governor Ralph Northam's oft-misquoted views on late-term abortions.
Even some conservative commentators were upset that pet-eating made it from the online fringes to the debate stage. Neither of the Springfield-based individuals behind the early Facebook chain that spread this falsehood could substantiate their claim.
THE CHAT-CHECKER
New in Science: A three-round conversation with ChatGPT reduced belief in a conspiracy theory of choice by an average of 20%. That's… pretty good! The effect held across a wide range of conspiracy theories and persisted even two months after the intervention.
You can get a good sense of the study design in the graphic below. I am inclined to agree with the authors' assessment that the intervention was so successful because the LLM could tailor its responses to the unique reasons each participant had for believing the conspiracy theory. You can also play around with their AI fact-checker at debunkbot.com.
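For the curious, the intervention's shape is easy to picture in code. Below is a minimal sketch of the three-round tailored-rebuttal dialogue structure the study describes; `ask_llm` is a hypothetical placeholder for whatever chat-completion call the researchers actually used, and the simulated participant reply is purely illustrative.

```python
def ask_llm(messages):
    # Placeholder for a real chat-model call (e.g. a chat-completions
    # request). Here it just echoes the claim it is rebutting so the
    # sketch runs without any API access.
    return f"[tailored counter-evidence to: {messages[-1]['content'][:40]}]"

def debunk_dialogue(conspiracy_statement, rounds=3):
    """Run a three-round conversation that responds to the participant's
    own stated reasons for belief, as in the study design."""
    system = (
        "You are speaking with someone who believes the claim below. "
        "Reply with accurate, specific counter-evidence that addresses "
        "their personal reasons for believing it."
    )
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": conspiracy_statement},
    ]
    transcript = []
    for _ in range(rounds):
        reply = ask_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        transcript.append(reply)
        # In the real study the participant answers back each round;
        # here we simulate a generic follow-up turn.
        messages.append({"role": "user", "content": "But what about ..."})
    return transcript

transcript = debunk_dialogue("The moon landing was staged.")
print(len(transcript))  # three rounds of tailored responses
```

The point of the structure is that each assistant turn sees the participant's latest objection, which is what lets the model personalize its rebuttals rather than recite a generic fact check.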
WHITHER, FACT CHECK LABELS?
A preprint by researchers in Luxembourg, France and Germany claims community notes on X reduced the spread of tweets they were attached to by up to 62 percent and doubled their chance of being deleted. The study also found that the labels typically came too late to affect the overall virality of the post. (This is a bit of a chicken-and-egg problem where a viral fake is more likely to be seen by people who can debunk it.)
This peer-reviewed paper in Misinformation Review is less encouraging, finding that the "disputed" labels that (then) Twitter appended to false claims of election fraud increased belief in the false claim among Trump supporters. It's worth noting that this was a survey rather than an analysis of platform data, and that no information beyond the label was provided.
CONCEALED TRANSPARENCY
Meta announced last week that it would be relocating the label for AI-modified images it launched in April. Instead of appearing above the image, "AI info" is now relegated to the list of options you get when clicking the menu button.
Incidentally, Google announced on Tuesday that its own labeling efforts would be similarly buried behind a menu button in Search's "About this Image" feature. Google did not share mock-ups of this implementation and was vague about its launch date.
I am divided about this setup. As I've written in the past, Meta's labels are imprecise, failing to differentiate between minor edits and entirely AI-generated content. They are also infrequent, failing to capture large swaths of entirely AI-generated accounts created with tools that don't adhere to industry standards.
Reducing the labelsā prominence may be a good interim measure to give engineers time to work on their precision and recall while still providing access to journalists and researchers who understand their limitations.
The challenge is that this is likely to become a permanent approach that ends up doing very little to help ordinary users, who do not click on tiny three-dot buttons, distinguish real photos from fake ones.
DEPT. OF DID WE REALLY NEED THIS?
Google has not one but two AI products that transform a piece of writing into a podcast-style discussion. Illuminate is restricted to arXiv articles, presumably to limit abuse, but NotebookLM lets you use pretty much any text.
I was curious to see how NotebookLM coped with polarizing or false content, so I had Google generate a "deep dive conversation" for four different URLs.