☀️ TRENDING AI NEWS
🏢 OpenAI closes $122B funding round at an $852B valuation - retail investors got a slice too
🤖 ChatGPT is now accessible through Apple CarPlay on iOS 26.4
⚠️ Anthropic accidentally leaked 512,000+ lines of Claude Code source - and it revealed some wild future features
🛠️ Google releases Veo 3.1 Lite, a cheaper and faster video generation model for developers
Three hundred and fifty million parameters, trained on 28 trillion tokens. That's Liquid AI's new model, and it's a direct challenge to the assumption that bigger always means smarter. But before we get there - OpenAI just closed the largest private funding round in tech history, ChatGPT showed up in your car dashboard, and Anthropic accidentally handed the internet a peek at its roadmap. Let's get into it.
🤓 AI Trivia
OpenAI's latest funding round is one of the largest in private tech history. But which company led the round?
🏦 Microsoft
📦 Amazon
🔵 SoftBank
🟢 Nvidia
The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇
🏢 OpenAI Just Closed a $122B Funding Round - and Let Regular People In
OpenAI has officially closed a $122 billion funding round, pushing its valuation to $852 billion. The round was led by Amazon, with Nvidia and SoftBank participating, and it cements OpenAI as one of the most valuable private companies on Earth. For context, the company now generates $2 billion in revenue per month.
Retail Investors Got a Seat at the Table
Here's the detail that stood out: $3 billion of the round came from retail investors - ordinary people, not institutions. That's unusual at this scale, and it's widely read as OpenAI warming up its investor base before an IPO. The company still isn't profitable, but at $2B/month in revenue, the path there is at least visible now.
Whether this valuation holds post-IPO is the big question. The AI market is moving fast, competition is intensifying, and the cost of training and running frontier models isn't shrinking as quickly as anyone hoped. But for now, the money is in.
🤖 ChatGPT Is Now in Your Car Dashboard
If you updated to iOS 26.4, ChatGPT is now accessible directly from your Apple CarPlay interface. Apple added support for "voice-based conversational apps" in this update, opening the door for AI chatbots to plug into the in-car platform.
What It Does (and Doesn't Do) Behind the Wheel
When using ChatGPT through CarPlay, the app doesn't store your conversations - a deliberate privacy decision for the driving context. You interact entirely by voice, which is exactly how it should work when your eyes need to stay on the road.
The practical upside is real: hands-free AI assistance for drafting messages, answering questions, brainstorming, or just thinking out loud during a commute. It's a small integration but a meaningful one - the car dashboard is one of the last screens where AI assistants haven't fully landed yet. That's changing now.
(Building something new and need a fast web presence? 60sec.site uses AI to spin up a clean, professional website in under a minute - worth bookmarking.)
⚠️ Anthropic Accidentally Leaked Claude Code's Roadmap - Including a Virtual Pet
This one is genuinely strange. After Anthropic released the Claude Code 2.1.88 update, users noticed the release shipped with a source map file embedding the full TypeScript codebase - over 512,000 lines of code exposed by accident.
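For the curious, this is exactly what source maps are built to do: the spec's optional "sourcesContent" field can embed every original source file verbatim, which is how one shipped .map file can expose a whole codebase. Here's a minimal Python sketch of the idea - the file name is a hypothetical placeholder, not the actual leaked artifact:

```python
# Sketch of why a shipped source map leaks code: the spec's optional
# "sourcesContent" field can embed every original source file verbatim.
import json

with open("cli.js.map") as f:  # hypothetical file name, not the real artifact
    source_map = json.load(f)

# Pair each original file path with its embedded source text.
for path, text in zip(source_map.get("sources", []),
                      source_map.get("sourcesContent") or []):
    print(f"{path}: {len(text.splitlines())} lines recovered")
```

The fix, for what it's worth, is just as simple: strip sourcesContent (or the .map files entirely) from production builds.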
A Tamagotchi and an Always-On Agent in the Wings
The leaked code reportedly hints at two notable upcoming features: a Tamagotchi-style "pet" companion built into the coding tool, and an always-on background agent that can work without you actively prompting it. Neither feature is live yet, but the existence of both in the codebase tells you a lot about where Anthropic thinks Claude Code is going - toward something more like a persistent collaborator than a tool you pick up and put down.
The accidental disclosure is Anthropic's second notable incident this week, capping a rough few days on the operational side. The company hasn't officially commented on the features revealed in the leak.
🛠️ Google's Veo 3.1 Lite Targets the Price Problem in AI Video
AI video generation has been getting technically impressive for a while now. The problem has always been cost - generating even a few seconds of high-quality video is expensive enough to make production-scale use impractical. Google is directly addressing that with Veo 3.1 Lite, a new tier of its generative video model available through the Gemini API.
Speed and Cost Over Maximum Fidelity
Veo 3.1 Lite is positioned as the lower-cost, higher-speed option for developers who need to generate video at scale - think app prototypes, automated content pipelines, or products where "good enough, fast, cheap" beats "perfect but slow." It's not the top-of-the-line model, but that's kind of the point.
The move signals something broader: the generative video race is shifting from "who can make the most impressive clip" to "who can make this economically viable at volume." For developers building on top of video AI, this is the tier that actually makes the math work.
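If you want a feel for what "available through the Gemini API" looks like in practice, here's a hedged sketch following the google-genai Python SDK's documented video-generation flow. The model id "veo-3.1-lite" is our assumption - check the API docs for the actual identifier:

```python
import time
from google import genai

# Sketch of video generation via the Gemini API (google-genai SDK).
# Assumes GEMINI_API_KEY is set in the environment; the model id below
# is an assumption, not a confirmed identifier.
client = genai.Client()

operation = client.models.generate_videos(
    model="veo-3.1-lite",  # assumed model id
    prompt="A time-lapse of a city skyline at dusk, cinematic lighting",
)

# Generation is asynchronous: poll the long-running operation until done.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("skyline.mp4")
```

Per-clip pricing is the number to watch here - for pipelines generating thousands of clips, the Lite tier is what turns a demo into a product.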
⚡ Liquid AI's 350M Model Was Trained on 28 Trillion Tokens - and It Shows
The conventional wisdom in AI has been: more parameters, more intelligence. Liquid AI is pushing back on that with LFM2.5-350M - a 350 million parameter model trained on 28 trillion tokens (scaled up from 10 trillion in earlier training) with large-scale reinforcement learning applied on top.
28 Trillion Tokens, 350M Parameters - Intelligence Density as a Strategy
The idea is that compute-efficient models trained on vastly more data can punch above their weight class - competing with much larger models while being far cheaper to run and deploy. For developers building on-device or latency-sensitive applications, a genuinely capable 350M model is a big deal.
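To make "cheap to run and deploy" concrete: a 350M-parameter model fits comfortably in memory on a laptop, no GPU cluster required. Here's a minimal sketch using the standard transformers pipeline - the hub id "LiquidAI/LFM2.5-350M" is an assumed placeholder, not a confirmed checkpoint path:

```python
from transformers import pipeline

# Runs a ~350M-parameter model locally. The hub id below is an assumed
# placeholder for Liquid AI's release, not a confirmed checkpoint path.
generator = pipeline(
    "text-generation",
    model="LiquidAI/LFM2.5-350M",  # assumed hub id
)

prompt = "In one sentence, why do small language models matter on-device?"
print(generator(prompt, max_new_tokens=48)[0]["generated_text"])
```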
This is a recurring theme worth paying attention to: the frontier is still chasing scale, but a quieter race is happening at the efficiency end - one we cover regularly. The results from small, well-trained models are getting harder to dismiss.
🔬 Hugging Face Releases TRL v1.0 - Post-Training Just Got a Proper Framework
If you fine-tune models, this one's for you. Hugging Face has officially released TRL v1.0 - the Transformer Reinforcement Learning library - marking its transition from a research experiment to a stable, production-ready framework.
SFT, Reward Modeling, DPO, and GRPO - All in One Stack
Version 1.0 unifies the full post-training pipeline - Supervised Fine-Tuning (SFT), Reward Modeling, Direct Preference Optimization (DPO), and Group Relative Policy Optimization (GRPO) - into a single coherent workflow. Previously, teams often cobbled together separate tools for each stage. Now there's a standardized, versioned path through the whole process.
For teams building custom models or aligning base models to specific tasks, this removes a significant amount of glue code and guesswork. The "production-ready" label matters here - it means the API is stable and won't break between updates, which has been a real pain point for anyone who's built on earlier versions of TRL. A solid release for the developer tools ecosystem.
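For a taste of what the unified workflow looks like, here's a minimal sketch chaining SFT into DPO with TRL's trainer classes. Model and dataset ids are illustrative, and hyperparameters are omitted for brevity:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

# Stage 1: supervised fine-tuning on instruction data.
sft = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",                     # any causal LM checkpoint
    args=SFTConfig(output_dir="sft-checkpoint"),
    train_dataset=load_dataset("trl-lib/Capybara", split="train"),
)
sft.train()

# Stage 2: align the SFT checkpoint on prompt/chosen/rejected pairs.
dpo = DPOTrainer(
    model="sft-checkpoint",                        # continue from stage 1
    args=DPOConfig(output_dir="dpo-checkpoint"),
    train_dataset=load_dataset("trl-lib/ultrafeedback_binarized", split="train"),
)
dpo.train()
```

Swap in RewardTrainer or GRPOTrainer at stage 2 and the shape of the code barely changes - that consistency across stages is the point of v1.0.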
🌎 Trivia Reveal
The answer is Amazon! Amazon led OpenAI's $122B round, alongside Nvidia and SoftBank. Microsoft, despite its longstanding partnership with OpenAI, was notably not listed as a lead investor in this round - an interesting detail given how central that relationship has been to OpenAI's infrastructure story.
💬 Quick Question
With ChatGPT now in CarPlay and always-on agents apparently on Anthropic's roadmap - how much AI access in your daily environment feels like too much? Is there a context where you'd actively not want an AI assistant available? Hit reply and let me know - I genuinely read every response, and this one I'm curious about.
That's it for today - thanks for reading. Stay sharp out there, and we'll see you tomorrow with more from the front lines of AI. For the full archive of past issues, head to dailyinference.com.