Hi, folks.
This newsletter is brought to you by aloe vera shots and nudged icebergs. It is a ~6-minute read and contains 45 links.
Top Stories
COPYCOP’S PROMPTS
The cybersecurity company Recorded Future claims to have identified a network of 12 websites using generative AI to spread anti-Western narratives. The network, likely Russia-aligned, used an LLM tool to rewrite articles from a variety of media outlets with a “cynical tone and biased context.” The websites now appear to be offline but had previously been amplified by the Doppelgänger network.
CHATGPT’S MISINFO CARVEOUT
OpenAI published a Model Spec that specifies the “desired behavior” for ChatGPT and its other public tools.
The document spells out a hierarchy of Objectives > Rules > Defaults. Roughly speaking, “objectives” are the tool’s use cases, “rules” are hard-wired instructions that mandate or prohibit certain responses, and “defaults” are objective-consistent preset behaviors that users can opt out of.
OpenAI places factual accuracy on the lowest rung of this hierarchy. I believe that’s why it’s relatively easy to get ChatGPT to return the false claim that aloe vera can cure cancer: