Hello and welcome to Faked Up, a newsletter about digital deception, dreck and disinformation.
I’m Alexios Mantzarlis and I’ve been sending newsletters since 2015.
I am currently the Director of the Security, Trust & Safety Initiative at Cornell Tech, but Faked Up is a personal project. It is informed by more than a decade of work to improve online information quality as a fact-checker, researcher and tech worker.
Faked Up has been featured, among other places, on 404 Media, Engadget, NBC News, The Washington Post and Wired. My work led Meta and Google to take down hundreds of ads and several apps for AI undresser services (FU#12, FU#16, FU#17) and was cited in a letter by a bipartisan group of 26 members of Congress to Google CEO Sundar Pichai on the topic. Faked Up also possibly led OpenAI to terminate a political chatbot (FU#7).
Subscribe to Faked Up if you care about making our public forums resilient to fakery.
Answers to some of your questions
What’s in Faked Up? Every week, I summarize, expand on and contextualize 5-10 top stories on the fakeness beat. I also share a dozen other headlines that caught my eye.
I’m particularly interested in tracking how information quality is being degraded online and what tech platforms are (not) doing about it, but I also cover impersonation, misinformation and deceitful or harmful spammy content regardless of how it was created and where it was disseminated.
My aspiration is to help you distinguish the signal from the noise, the snake oil from the real solutions.
I point to must-read pieces from great tech reporters and researchers, mostly from English-speaking organizations. I add insights, explain why a story matters and flag when I think the tech industry isn’t doing enough to address online harms.
The newsletter is typically 1000-2000 words long and takes 5-8 minutes to read.
Consider it the TL;DR you’d get from your too-online friend about all the ways the internet was f**ed up this week.
Why should I read what you have to say? Kara Swisher once called a piece I wrote “smart.” While that was basically when my career peaked, I have also held a range of roles that touched on big content moderation decisions and misinformation.
As the founding director of the International Fact-Checking Network, I was instrumental in the launch of Facebook’s third-party fact-checking program. As a product policy manager for Google, I worked to block recommendations of Russian state media after the invasion of Ukraine. Before leaving Google, I set up a team dedicated to adversarial red teaming of its generative AI tools.
What happens when you make mistakes? I do my best to fact-check the stuff I write, but I also recognize that I won’t catch everything. Here’s my corrections policy.
Are there going to be hot takes? Yes. I’m not an investigative reporter. I am a practitioner and analyst with strong opinions about this topic area. At the same time, I’ve worked at a tech platform and understand that content moderation decisions rarely come without tradeoffs. I’ll try to write with that in mind.
What are your biases? Great question, imaginary asker in my head! Here are some of my priors when it comes to tech policy. I share these not because I expect you will agree with them, but so that you know which beliefs will color my analysis:
Freedom of speech does not equal freedom of reach. Large platforms can and should impose rules that discourage abusive content and encourage healthy online ecosystems — or face consequences.
Bad laws are worse than no laws. Misguided legislation from less-than-well-intentioned legislators can further degrade online information spaces.
Context and friction are often better than removal. Especially when it comes to emerging topics, providing background from reputable sources and slowing down virality is usually better than deleting content that might be misinformation.
Expertise matters. The way certain organizations collect and share information means they should be treated as reputable sources, even if they occasionally make mistakes.
Technology isn’t a silver bullet. There is no social harm that can be fixed exclusively by a technological solution.
The tech policy conversation is overly America-centric. Despite their global footprint, many of the world’s largest tech platforms are headquartered in California and respond first and foremost to what they read, see and understand there. As a non-American, I’ll try to be global in outlook, while recognizing that a whole lot of great research and coverage happens in the United States. Keep me honest!
Finally, here are some books that have informed my understanding of my beat:
Why should I pay for Faked Up? I don’t know the answer for you, but I do know the answer for me. This newsletter is a side project and is not supported by my current, past or future employers. If I can make ~20% of my income this way, I can avoid hustling for a soul-crushing consulting gig to offset the income I lost by leaving the private sector.
If you work for a company that is worth billions of dollars…EXPENSE THIS NEWSLETTER!
If you can’t afford the $6/month, shoot me a note at mantzarlis@protonmail.com and I’ll comp it for six months, no questions asked.
I work in tech and am worried about misinformation. Can I tell you stuff confidentially? Yes please! Reach out to mantzarlis@protonmail.com — I know how you feel and I will treat your concerns with respect.