☀️ TRENDING AI NEWS
🚀 SpaceX secures option to acquire AI coding startup Cursor for $60B - or partner for $10B
⚠️ Anthropic's restricted Mythos cybersecurity model accessed by unauthorized users via online forum
🛠️ OpenAI launches ChatGPT Images 2.0 with web search and improved text rendering
⚠️ Sullivan & Cromwell apologizes to federal judge after AI hallucinations found in major legal filing
Three major stories this week paint a pretty clear picture of where AI is right now: a $60B bet on coding tools, a dangerous model falling into the wrong hands, and an image generator that finally got good at reading the room. Let's get into it.
🤓 AI Trivia
Anthropic's restricted Mythos model made headlines this week - but what category of AI risk does Anthropic use to classify models like Mythos that could pose threats to critical infrastructure?
🔴 ASL-1 (Minimal Risk)
🔴 ASL-2 (Early Capability)
🔴 ASL-3 (Serious Uplift Potential)
🔴 ASL-4 (Catastrophic Risk)
The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇
🚀 SpaceX Has an Option to Buy Cursor for $60 Billion
This is a genuinely strange deal. SpaceX - Elon Musk's rocket company - has announced it has secured an option to either acquire Cursor, the AI coding platform, for $60 billion later this year, or pay $10 billion for a partnership arrangement. As a point of reference, Cursor reportedly had a $50B valuation just weeks ago.
Neither Side Has Models That Can Match the Big Two
TechCrunch's analysis puts it well: this deal reveals a weakness at both companies. Cursor relies on Claude and GPT-4 under the hood - it doesn't have its own proprietary model. And xAI's Grok hasn't demonstrated it can compete with Anthropic or OpenAI at the frontier. A combination would shore up market position, but the underlying model gap doesn't go away overnight.
The $10B partnership fee alone is eye-watering for access to a company whose product competes directly with developer tools from Anthropic and OpenAI. Musk is clearly trying to buy relevance in a market where he's currently an outsider.
⚠️ Anthropic's Mythos Was Accessed by People Who Shouldn't Have It
This is the story that's going to keep Anthropic executives up at night. Bloomberg reported yesterday that a "small group of unauthorized users" gained access to Mythos - the company's restricted cybersecurity AI that Anthropic itself warned could pose risks to national security, economies, and public safety.
How It Happened - and What Anthropic Is Saying
According to the report, members of a private online forum got in via a mix of tactics, with help from someone identified as a third-party contractor for Anthropic. Anthropic told TechCrunch it's investigating but says there's "no evidence" its core systems were compromised.
The timing is particularly uncomfortable. Sam Altman publicly called Mythos "fear-based marketing" on a podcast this week, downplaying its risks. And separately, Mozilla used the model through proper channels to find 271 bugs in Firefox - demonstrating the model genuinely does what Anthropic claims. This isn't theoretical capability.
The bigger picture: if a model is powerful enough to require restricted access and government-level vetting, keeping it contained is a real operational challenge - not just a PR one. We've been covering the cybersecurity angle of Mythos since launch, and this incident suggests the concerns weren't overblown.
🛠️ ChatGPT Images 2.0 Can Now Search the Web to Build Your Image
OpenAI quietly dropped a significant upgrade to its image generation this week. ChatGPT Images 2.0 now features "thinking capabilities" - meaning it can search the web before generating an image to pull in relevant context. Ask it to create a promotional poster for a real event and it'll actually look up the event details first.
Text Rendering Finally Crosses the Usefulness Threshold
The other big change is text rendering. Wired's testing confirms the model is now genuinely good at placing accurate, readable text inside images - something that has historically been a glaring weakness across all AI image generators. It still struggles with languages other than English, but for English-language creative work, this is a meaningful leap.
The model can also generate multiple images from a single prompt and follows complex instructions more reliably. If you've been using AI image generation tools for design work or content creation, this update is worth testing today - and if you're building a landing page around it, tools like 60sec.site let you spin up an AI-generated site in under a minute to show off your creations.
⚠️ A Top Wall Street Law Firm Just Got Caught Using AI to Write Fake Case Citations
Sullivan & Cromwell - one of the most prestigious law firms in the world - has apologized to a New York federal judge after a major filing in a high-profile case contained errors from AI hallucinations. The firm's global co-head of litigation personally addressed the court over fabricated citations in documents related to the Prince Group case.
The Stakes Are Different in a Courtroom
This isn't the first time AI hallucinations have shown up in legal filings - there have been several high-profile incidents of lawyers citing non-existent cases generated by ChatGPT. But when it's happening at Sullivan & Cromwell, which advises the biggest companies and governments in the world, it signals this isn't a problem isolated to solo practitioners cutting corners.
The broader implication: even sophisticated, well-resourced legal teams are letting AI outputs slip through without adequate verification. For anyone using AI in high-stakes professional contexts, the human review layer isn't optional.
🛠️ Meta Is Recording Employee Keystrokes to Train Its AI
Meta has built an internal tool that converts employee mouse movements and keystrokes into training data for its AI models. The company is essentially turning its own workforce's daily computer interactions into a behavioral dataset.
Employees as Involuntary Data Labelers
This raises obvious questions about employee privacy and consent. The data being collected - how people actually use software, what they click on, how they navigate tasks - is valuable precisely because it captures real human decision-making at the interface level. That's exactly what's hard to replicate with synthetic data.
It's worth keeping in mind: Meta employs tens of thousands of people across engineering, design, operations, and research. The scale of behavioral data this could generate is enormous. Whether employees were given meaningful notice or choice isn't clear from the reporting.
🌎 Trivia Reveal
The answer is ASL-3 (Serious Uplift Potential)! Anthropic's AI Safety Level framework classifies models at ASL-3 when they could provide serious assistance to those seeking to cause significant harm - such as attacks on critical infrastructure. Mythos was restricted precisely because it met this threshold, making yesterday's unauthorized access story even more significant.
💬 Quick Question
The Mythos breach story has me thinking: when a company restricts a powerful AI model "for safety," do you actually trust that it stays restricted? Hit reply and tell me - I read every response and genuinely want to know where your trust level sits right now.
That's it for today. A lot happened this week and we're only halfway through it - check out the Daily Inference archive if you want to catch up on anything you missed. See you tomorrow.