🤖 Daily Inference

Tuesday, March 3, 2026

Good morning! It's a big news day - and honestly, a complicated one. The AI-and-military story that's been simmering for weeks just got a lot more intense: Anthropic's Claude was reportedly used in U.S. strikes on Iran, despite the Trump administration's ban on its use. Meanwhile, OpenAI is doubling down on its own Pentagon relationship - and Claude is now the No. 1 app in the App Store because of all the attention. We've also got Google's jaw-dropping speed breakthrough in AI retrieval, and a growing global conversation about what all these data centres are actually costing us environmentally. Let's get into it.

⚠️ The U.S. Military Reportedly Used Claude in Iran Strikes - Despite a Presidential Ban

This is the story dominating AI circles right now. According to The Guardian, the U.S. military reportedly used Anthropic's Claude AI model during strikes on Iran - even though the Trump administration had issued a ban on its use in military operations. That's a remarkable and deeply troubling development, raising immediate questions about oversight, accountability, and just how much control governments actually have over AI tools once they're embedded in institutional workflows.

The story puts Anthropic in an extraordinarily difficult position. The company has long positioned itself as the "safe" AI lab - one that prioritises careful deployment and ethical guardrails above all else. Anthropic has previously pushed back against Pentagon use of its models, and that principled stance is a core part of its brand identity. But if Claude ended up being used in lethal military operations anyway, it raises the uncomfortable question: can any AI company truly control how its technology is used once it's out in the world?

This isn't just a story about Anthropic - it's a story about the gap between AI policy and AI reality. We've been tracking the ethical and governance questions around military AI closely, and we'll keep following this one as it develops.

🏢 OpenAI Reveals Details of Its Pentagon Agreement

While Anthropic's Pentagon entanglement is involuntary and unwelcome, OpenAI's relationship with the U.S. Department of Defense is very much intentional. Sam Altman announced a Pentagon deal with what he described as "technical safeguards," and now OpenAI has shared more details about exactly what that agreement entails, according to TechCrunch.

The disclosure comes at a charged moment - with Claude's alleged military use in the headlines, pressure is mounting on all major AI labs to be transparent about their government partnerships. OpenAI's decision to reveal more specifics appears partly strategic: by proactively sharing details about its safeguards framework, the company is positioning itself as the responsible actor in the military AI space, one that engages openly rather than having its tools used covertly.

The broader trend is undeniable - AI companies are being drawn deeper into national security infrastructure, whether they choose it or not. The key question going forward is whether "technical safeguards" are sufficient guardrails, or whether the deployment of AI in high-stakes military contexts requires something far more robust: legislative frameworks, international agreements, and genuine democratic oversight. If you want more context, we've been following the AI-in-government beat closely.

📈 Anthropic's Claude Hits No. 1 in the App Store Amid Pentagon Controversy

Here's an irony you couldn't script: in the wake of the controversy over Claude's alleged use in military strikes, Anthropic's Claude app surged to the No. 1 spot in the App Store, according to TechCrunch. The company, which had previously been locked in a very public dispute with the Pentagon over the use of its AI, suddenly found itself at the top of the charts - a strange kind of viral moment for an AI safety company.

The spike in downloads reflects something important about public psychology around AI right now. When Anthropic pushed back against military use of Claude - positioning itself as a company with genuine ethical lines it won't cross - that narrative resonated with consumers. People are increasingly paying attention to which AI companies seem to have principles, and they're voting with their downloads.

Of course, the App Store bump doesn't resolve the deeper tension Anthropic faces. As TechCrunch noted in a sharp analysis piece, the company has built a trap for itself: its safety-first branding is a genuine competitive advantage, but it also creates enormous pressure every time the real world doesn't cooperate with its ideals. The AI controversies tag on our site has the full arc of this story if you want to catch up.

⚡ Google AI's STATIC Framework Delivers 948x Faster Constrained Decoding for LLMs

Pivoting from ethics to engineering - and it's a big one. Google AI has introduced a new framework called STATIC, a sparse matrix approach that delivers a reported 948x speedup in constrained decoding for LLM-based generative retrieval. That's not a typo - nearly a thousand times faster for a specific but critical task in how LLMs search and retrieve structured information.

To understand why this matters, a quick explainer: constrained decoding is the process by which an LLM generates outputs that conform to a specific structure - think database queries, JSON outputs, or document identifiers. It's a foundational capability for enterprise AI systems that need to interact with structured data. The problem is that constrained decoding has historically been a bottleneck, slowing down generative retrieval pipelines significantly.
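To make that concrete, here's a minimal, hypothetical sketch of constrained decoding at the character level: at each step, the logits are masked so the model can only emit tokens that keep the output a valid prefix of some allowed identifier. The toy vocabulary and names below are ours, not Google's:

```python
import numpy as np

# Hypothetical set of valid document identifiers the model is allowed to emit.
VALID_IDS = ["doc-001", "doc-002", "doc-017"]

def allowed_next_chars(prefix: str) -> set[str]:
    """Characters that keep `prefix` a prefix of at least one valid identifier."""
    return {vid[len(prefix)] for vid in VALID_IDS
            if vid.startswith(prefix) and len(vid) > len(prefix)}

def constrained_step(logits: np.ndarray, vocab: list[str], prefix: str) -> str:
    """Mask disallowed tokens to -inf, then decode greedily from what's left."""
    allowed = allowed_next_chars(prefix)
    mask = np.array([tok in allowed for tok in vocab])
    return vocab[int(np.argmax(np.where(mask, logits, -np.inf)))]

# Toy usage: a character-level "vocabulary" and random logits.
vocab = list("doc-0123456789")
print(constrained_step(np.random.randn(len(vocab)), vocab, "doc-0"))
```

The expensive part is recomputing that "what's allowed next?" check against the full constraint set at every single decoding step - which is exactly the bottleneck a faster representation can attack.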

STATIC addresses this with a sparse matrix representation that dramatically reduces the computational overhead of enforcing those constraints during generation. A nearly 1,000x speed improvement isn't incremental - it's the kind of breakthrough that could unlock entirely new applications for LLM-based search and retrieval at enterprise scale. For anyone building AI infrastructure or working in enterprise AI, this is one to bookmark.
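We haven't seen STATIC's code, so treat this as our reading of the reported idea rather than the actual implementation: precompute a sparse matrix whose rows are constraint states and whose columns are vocabulary tokens, so per-step enforcement collapses to a single sparse row lookup. All sizes and names below are hypothetical:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical sizes: constraint-automaton states x vocabulary tokens.
NUM_STATES, VOCAB_SIZE = 10_000, 32_000

# Legal (state, token) transitions, built once offline from the set of
# valid outputs (just a few dummy entries here).
states, tokens = [0, 0, 1, 1], [42, 99, 7, 42]
ALLOWED = csr_matrix((np.ones(len(states)), (states, tokens)),
                     shape=(NUM_STATES, VOCAB_SIZE), dtype=bool)

def masked_logits(logits: np.ndarray, state: int) -> np.ndarray:
    """Per-step constraint enforcement is one sparse row lookup, not a trie walk."""
    mask = ALLOWED.getrow(state).toarray().ravel()
    return np.where(mask, logits, -np.inf)

# Usage: mask logits for state 0; only tokens 42 and 99 survive.
out = masked_logits(np.random.randn(VOCAB_SIZE), state=0)
```

The win comes from moving the expensive work offline: the legality of every (state, token) transition is computed once, and the decoding loop just reads it back.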

🌍 Data Centres Are Under the Microscope - From the UK to Australia

The AI boom has a physical footprint, and the world is starting to demand answers about it. Two separate Guardian investigations published this week highlight the growing scrutiny on data centre infrastructure - one focused on the UK, the other on Australia - and together they paint a picture of an industry that has expanded faster than the environmental frameworks designed to govern it.

In the UK, data centre developers are facing calls to disclose the effect of their facilities on net emissions. The concern is straightforward: as AI models grow larger and inference demands spike, data centres are consuming more electricity than ever. Without mandatory disclosure requirements, it's nearly impossible for policymakers - or the public - to accurately account for AI's true carbon cost. The push for transparency is gaining momentum, with campaigners arguing that voluntary reporting simply isn't sufficient given the scale of expansion underway.

In Australia, The Guardian's analysis goes even further, examining not just emissions but also the impact on power prices and water supply - data centres consume electricity and water at enormous scale. Cooling systems for large AI clusters require significant water, and in a country already managing climate-driven water stress, this is a legitimate concern. The environmental concerns around AI infrastructure aren't going away - if anything, they're intensifying as the buildout accelerates.

🛠️ Alibaba Open-Sources CoPaw: A Multi-Agent Workstation for Developers

On the open-source front, Alibaba's team has released CoPaw, a high-performance personal agent workstation designed to help developers scale multi-channel AI workflows and memory. It's a significant contribution to the open-source AI ecosystem, targeting the increasingly complex challenge of managing multiple AI agents working in parallel across different tasks and data sources.

CoPaw is positioned as a developer-first tool - a workstation environment where you can orchestrate AI agents, manage persistent memory across sessions, and run multi-channel workflows without the overhead of building that infrastructure from scratch. As AI agents become more capable and more commonly deployed in production environments, tooling like this becomes critical. The bottleneck is shifting from "can the model do this?" to "can developers reliably manage and scale agentic systems?"
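CoPaw's own interfaces aren't documented in what we've seen, so here's a generic, hypothetical sketch of one core pattern these workstations handle for you - agent memory that persists across sessions, kept separate per channel. Every name below is ours, not Alibaba's:

```python
import json
from pathlib import Path

MEMORY_PATH = Path("agent_memory.json")

class Agent:
    """A toy agent whose memory survives across sessions via a JSON file."""

    def __init__(self, name: str):
        self.name = name
        # Reload whatever this agent remembered from previous runs.
        self.memory = json.loads(MEMORY_PATH.read_text()) if MEMORY_PATH.exists() else {}

    def handle(self, channel: str, message: str) -> str:
        # Keep per-channel history so parallel workflows don't trample each other.
        self.memory.setdefault(channel, []).append(message)
        MEMORY_PATH.write_text(json.dumps(self.memory, indent=2))
        return f"[{self.name}/{channel}] {len(self.memory[channel])} messages remembered"

agent = Agent("research")
print(agent.handle("slack", "summarise today's AI news"))
```

Multiply that by dozens of agents, channels, and data sources, and you can see why developers would rather pull this infrastructure off the shelf than build it from scratch.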

By open-sourcing CoPaw, Alibaba is making a direct play for developer mindshare in the agentic AI space - a market that's heating up fast. If you're building with AI agents, this is worth a look. And speaking of building fast - if you need to spin up a web presence for your AI project quickly, 60sec.site is an AI website builder that can have you live in under a minute. Worth bookmarking.

💬 What Do You Think?

Today's big theme is the collision between AI capability and accountability - Claude allegedly used in military strikes despite a ban, OpenAI signing Pentagon deals, and data centres expanding faster than environmental rules can keep up. So here's my question for you:

Do you think AI companies can actually control how their technology is used once it's in the hands of governments and the military - or has that ship already sailed? And does it change how you think about which AI tools you use?

Hit reply and let me know - I read every single response, and the conversations you start genuinely shape what we cover next.

That's your Daily Inference for Tuesday, March 3rd. If you found this useful, please share it with a colleague or friend who's trying to keep up with AI - it's the best way to support what we do. See you tomorrow. 👋

Stay current with all things AI at dailyinference.com.
