☀️ TRENDING AI NEWS

  • 🏢 Pentagon locks in classified AI deals with OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, xAI, and Reflection - Anthropic notably excluded

  • 🤖 Meta acquires humanoid robotics startup Assured Robot Intelligence to accelerate its robot AI ambitions

  • 🚨 Dark-money nonprofit linked to OpenAI and a16z executives is paying influencers to stoke fears about Chinese AI

  • ⚡ NVIDIA Research demonstrates 1.8x rollout generation speedup via speculative decoding in NeMo RL at 8B scale

The Pentagon just quietly redrew the map of who it trusts with America's most sensitive AI work. Seven companies made the list. One very prominent name didn't.

🤓 AI Trivia

Which AI company was previously used by the Pentagon for classified work but was dropped from its latest round of classified AI contracts?

  • 🏢 Mistral AI

  • 🏢 Anthropic

  • 🏢 Cohere

  • 🏢 Inflection AI

The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇

🏢 Pentagon Picks Its AI Partners - And Leaves Anthropic Out in the Cold

On Friday, the Department of Defense announced it had struck classified AI agreements with seven companies: OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, Elon Musk's xAI, and the startup Reflection. The deals allow these companies' tools to be deployed on classified networks, giving the military access to frontier AI for sensitive national security work.

The Supplier That Got Cut

The conspicuous absence is Anthropic. The Claude maker was previously used by the Pentagon for classified work, but the DoD recently declared it a "supply-chain risk" - reportedly after Anthropic pushed back on terms that would have allowed its models to be used for any "lawful" purpose without restriction. The other seven companies reportedly agreed to exactly those open-ended terms.

This is a significant moment for the AI industry. The line between "responsible AI" limits and government deployment requirements is being drawn in real time, and Anthropic found itself on the wrong side of it - at least for now. If you follow the ongoing tension between AI labs and military contracts, this is the clearest expression yet of those fault lines.

It's worth noting that Google employees have been pushing back internally on Pentagon AI work for weeks. That pressure apparently didn't stop Google from signing.

You can follow all the latest on military AI over at our military AI coverage page and our AI regulation tracker.

🤖 Meta Buys a Humanoid Robotics Startup - and It's Not Playing Around

Meta quietly acquired Assured Robot Intelligence, a humanoid robotics startup, to beef up the AI models that will power its future robots. The deal was confirmed by Meta on Friday, though financial terms weren't disclosed.

From Social Feed to Robot Brain

This is a meaningful signal about where Meta is pointing its long-term bets. The company has been investing heavily in physical AI - the kind that needs to understand 3D space, objects, and how to interact with the world, not just text or images. Assured Robot Intelligence's expertise is expected to feed directly into the AI models that will run on Meta's humanoid hardware.

The timing is interesting. Meta is also under scrutiny for running get-rich-quick ads through its Manus AI brand, which it acquired for $2 billion. This robotics acquisition points to the more serious side of Meta's AI agenda - one focused on long-horizon research rather than influencer side-hustle content.

For more on the broader robotics race, check out our robotics coverage.

🚨 A Dark-Money Campaign Is Paying Influencers to Fear-Monger About Chinese AI

A Wired investigation has uncovered a campaign by a nonprofit called "Build American AI" - linked to a super PAC funded by executives at OpenAI and Andreessen Horowitz - that has been paying TikTok and social media influencers to spread pro-American AI messaging and stoke fears about Chinese AI as a national security threat.

Influencers as Geopolitical Proxies

The campaign reportedly pays creators to produce content framing Chinese AI development as dangerous, without always disclosing the financial relationship to viewers. The funding trail connects back to some of the most powerful figures in the US AI industry - which raises uncomfortable questions about who benefits from public anxiety about China's AI capabilities.

This kind of influence operation is different from traditional lobbying. It's targeting everyday social media audiences with emotionally charged content, funded by the very companies that stand to gain from US AI policy favoring their products. The AI industry's proximity to national security narratives - and the money flowing to shape those narratives - is worth watching closely.

⚖️ Musk v. Altman Week One - Bombshells, Bad Moments, and a Key Admission

The first week of the Musk v. Altman trial wrapped up with Elon Musk spending the better part of three days on the witness stand. The broad strokes of his argument: Sam Altman and Greg Brockman deceived him into funding OpenAI, then converted the nonprofit into a profit-making machine, betraying its founding mission.

The Admission That Stood Out

The moment that generated the most attention wasn't about OpenAI's conversion to for-profit - it was Musk's acknowledgment that his own company, xAI, used OpenAI's models to help train Grok through a process called model distillation. Musk argued this is standard industry practice (it often is), but the optics of suing someone over AI misuse while your own company distilled their models are... not great.
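For context, distillation trains a smaller "student" model to imitate a larger "teacher" by fitting the teacher's soft output probabilities instead of hard labels. Here's a minimal toy sketch of the idea - the one-weight teacher and student below are purely illustrative, not anything xAI or OpenAI actually runs:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stand-in "teacher": a fixed model whose soft outputs we can query.
def teacher(x):
    return sigmoid(3.0 * x)

# "Student": a single learnable weight w, trained so sigmoid(w * x)
# matches the teacher's output probabilities.
def distill(xs, lr=0.5, epochs=2000):
    w = 0.0
    for _ in range(epochs):
        for x in xs:
            p_t = teacher(x)       # soft target from the teacher
            p_s = sigmoid(w * x)   # student's current prediction
            # Gradient of the cross-entropy between teacher and
            # student distributions with respect to w.
            grad = (p_s - p_t) * x
            w -= lr * grad
    return w

w = distill([-2, -1, -0.5, 0.5, 1, 2])
```

After training, the student's weight converges toward the teacher's (here, 3.0) - it has "distilled" the teacher's behavior without ever seeing the teacher's internals, only its outputs. That's roughly why querying a rival's API can be enough to train on it.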

MIT Technology Review's coverage of the week's testimony is particularly sharp - Musk warned that AI could kill us all, had his own tweets and emails entered as evidence against him, and watched testimony from his money manager Jared Birchall potentially open up new legal complications. By most accounts, it was not a good week for Musk's case.

For deeper context on OpenAI's history, visit our OpenAI coverage and Elon Musk tag page.

⚡ NVIDIA Finds a Way to Make RL Training Nearly Twice as Fast

NVIDIA Research has published new results showing that integrating speculative decoding directly into its NeMo RL framework achieves a 1.8x rollout generation speedup at the 8 billion parameter scale - and projects a 2.5x end-to-end speedup at 235 billion parameters.

Why Rollout Speed Is the Bottleneck Nobody Talks About

During reinforcement learning training, a huge chunk of compute time is spent on "rollout" - the phase where the model generates responses that are then scored and used to update its weights. Speeding up rollouts without changing the model's outputs - NVIDIA describes the technique as "lossless" - means you can run more training iterations in the same amount of time, or train larger models within existing compute budgets.
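For the curious, speculative decoding works by letting a cheap draft model propose several tokens ahead, then having the large target model verify them all in one pass: matching tokens are accepted "for free," and the first mismatch falls back to the target's own token, so the output is identical to decoding with the target alone. A toy sketch with stand-in "models" (everything here is illustrative; NeMo RL's actual integration differs):

```python
import random

random.seed(0)
VOCAB = list(range(10))

def target_model(prefix):
    # Stand-in "large" model: deterministic greedy next token.
    return sum(prefix) % 10

def draft_model(prefix):
    # Stand-in "small" draft model: agrees with the target ~80% of the time.
    guess = sum(prefix) % 10
    return guess if random.random() < 0.8 else random.choice(VOCAB)

def greedy_generate(prefix, n_tokens):
    # Baseline: one target-model call per token.
    out = list(prefix)
    for _ in range(n_tokens):
        out.append(target_model(out))
    return out[len(prefix):]

def speculative_generate(prefix, n_tokens, k=4):
    """Generate n_tokens, proposing k draft tokens per verification pass."""
    out = list(prefix)
    target_passes = 0
    while len(out) - len(prefix) < n_tokens:
        # 1) Draft model proposes k tokens autoregressively (cheap).
        ctx = list(out)
        proposal = []
        for _ in range(k):
            t = draft_model(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) Target verifies all k positions (on a GPU this is one
        #    batched forward pass; here it's simulated with a loop).
        target_passes += 1
        ctx = list(out)
        for t in proposal:
            correct = target_model(ctx)
            if t == correct:
                ctx.append(t)       # draft token accepted
            else:
                ctx.append(correct) # mismatch: keep target's token, stop
                break
        out = ctx
    return out[len(prefix):len(prefix) + n_tokens], target_passes
```

Because each verification pass can accept several tokens at once, `speculative_generate` needs far fewer target-model passes than token-by-token decoding, yet produces exactly the same sequence as `greedy_generate` - that's the "lossless" part.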

The projected 2.5x gain at 235B parameters is particularly significant. At that scale, training costs are enormous, so even modest efficiency improvements translate to real money and time savings. This is the kind of infrastructure research that doesn't make headlines but quietly shapes which labs can afford to train frontier models.

Thinking about compute costs and token efficiency? Our Token Calculator can help you run the numbers on your own projects.

🛠️ Quick Sponsor Note - Build a Site in 60 Seconds

If you've been thinking about spinning up a landing page or portfolio - especially with AI doing more of the heavy lifting on side projects - 60sec.site is worth a look. It's an AI website builder that gets you live in under a minute. No design skills needed. And for more of your daily AI fix, bookmark dailyinference.com - we publish every day.

🌎 Trivia Reveal

The answer is Anthropic! The Claude maker was previously contracted by the Pentagon for classified AI work, but was dropped from the latest round of deals after the DoD declared it a supply-chain risk - reportedly after Anthropic pushed back on open-ended usage terms. The other seven companies signed agreements allowing any lawful use of their technology.

💬 Quick Question

Today's Pentagon story raises a real tension: should AI companies set hard limits on how governments can use their models, even if it costs them contracts? Or is that naive in the current geopolitical climate? Hit reply and tell me where you land on this - I genuinely read every response and would love to hear your take.

That's it for today - see you tomorrow with more. Stay curious out there.

Keep Reading