For the last issue of 2024, I asked three of the smartest people I know to discuss overarching trends in digital deception.
Chinmayi Sharma is an Associate Professor at Fordham Law School. Her research and teaching focus on open internet governance, cybersecurity, artificial intelligence accountability, and more.
Claire Wardle is an Associate Professor in the Department of Communication at Cornell University. Her research focuses on user-generated content, verification and misinformation.
Mor Naaman is a Professor in the Information Science department on the Cornell Tech campus in New York City. Mor’s research group studies our information ecosystem and its challenges.
You can find Chinny, Claire and Mor on Bluesky.
I really enjoyed getting Chinny’s limpid review of the conflicting court decisions on platform transparency, Claire’s counter-intuitive perspective on the end of CrowdTangle, and Mor’s insightful assessment of verification as a tool for accountability.
If all this sounds wonderfully academic, it certainly was. But there were also zingers about the “we-don’t-give-a-damn-phase” of platform Trust & Safety life cycles, calls to move on from a “command and control” approach to AI labels, and exasperation at the state of deepfake regulation. Also, apparently comments are sexy?
At the very end of the podcast, I asked my guests to gaze into their crystal balls and make Yes/No predictions on 2025 scenarios including Elon Musk’s acquisition of Truth Social (improbable), state influence operations targeting American lefties on Bluesky (likely) and the future of Section 230 (unchanged).
I came out of this conversation with a renewed commitment to covering this space with as much nuance as I can, and I will be thinking about changes to make to Faked Up in that regard. (I also came out of it with a renewed respect for sound editors; I must have spent 6 hours on Monday night alone moving clips around. At one point, the tool flat out refused to let Mor introduce himself).
There will be no Faked Up on December 25 or January 1. Expect to see an impact report for 2024 early in the new year and the usual newsletter back on January 8. Happy holidays to those taking some time off!
Related reading
Here are some links related to topics that came up in the podcast.
On tracking and combating misinformation
Labor dumps misinformation bill after Senate unites against it
We Don't Need Google to Help "Reimagine" Election Misinformation
9th Circuit: Provisions of California’s content-moderation law violate First Amendment
Justices side with Biden over government’s influence on social media content moderation
What Do People Want? Views on Platforms and the Digital Public Sphere in Eight Countries
On AI deception
Report – In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools
BJP Posts Fake AI Audio Clips of Sule, Patole, Alleges Poll Fraud
Israel Secretly Targets U.S. Lawmakers With Influence Campaign on Gaza
Governor Newsom signs bills to combat deepfake election content
On impersonation and verification
Airline held liable for its chatbot giving passenger bad advice - what this means for travellers
Are A.I. Clones the Future of Dating? I Tried Them for Myself.
HEADLINES
I physically could not spend 8 hours editing the podcast and also write a full newsletter. But here are top stories on the Faked Up beat this week for the sickos who listened to 45 minutes of podcast and still want MORE #content.
The EU has opened a formal investigation into TikTok’s role in alleged election interference in Romania.
The Onion’s bid for InfoWars was rejected after all.
At least 26 members of the US Congress have been targeted by synthetic NCII.
OnlyFans “chatters” are now AI-generated.
Luigi Mangione conspiracy theories abound.
A deepfake audio clip of Bashar al-Assad apologizing to the Syrian people went viral on X.
A network pushing AI-plagiarized spam content is active on Bluesky.
Nigerian police arrested almost 800 individuals involved in a CryptoRom scam.
The deepfake porn bill passed by the US Senate may become law this week.
The BBC has complained about Apple Intelligence misrepresenting its news articles.
WeChat also suffers from deepfake scam ads.
Spain is considering a law requiring corrections on posts by influencers with over 100,000 followers.
The Scottish Parliament’s feed may be vulnerable to live deepfake attacks.