🤖 Daily Inference

Wednesday, March 4, 2026

Happy Wednesday! The AI world has been anything but quiet this week - and today we're diving into stories that are reshaping how we think about AI, ethics, and war. We've got OpenAI stepping into Anthropic's Pentagon-shaped shoes, Claude rocketing to the top of the App Store amid a military controversy, a Supreme Court decision that could define AI creativity forever, and a sobering look at what AI-powered warfare actually looks like in practice. Plus, Cursor just quietly crossed a milestone that should make every SaaS founder pay attention. Let's get into it.

🏢 OpenAI Steps Into Anthropic's Pentagon Void

When Anthropic refused to let the U.S. Department of Defense use Claude for certain military applications, someone was always going to fill that gap. That someone turned out to be OpenAI. In a move that's drawing both praise and sharp criticism, OpenAI revealed more details about its agreement with the Pentagon - a deal that represents a significant departure from the company's earlier stated commitments around weapons development and autonomous warfare.

The arrangement puts OpenAI squarely in the center of a debate that the broader AI industry has been circling for months: how should AI companies work with governments, particularly the military? Tech workers are already pushing back - a coalition of employees has urged Congress and the DoD to reconsider the decision to label Anthropic a supply-chain risk, a designation that effectively backed the company into a corner. OpenAI, meanwhile, appears to have found what MIT Technology Review describes as a "compromise" - though critics argue it's more of a capitulation. The implications of OpenAI's evolving government relationships run deep.

What's clear is that there's no good playbook here. As one TechCrunch analysis put it plainly: "No one has a good plan for how AI companies should work with the government." The Pentagon wants cutting-edge AI capabilities; AI companies want lucrative government contracts; and ethicists, researchers, and workers want safeguards that may be fundamentally incompatible with military use. This triangle of competing interests isn't going away anytime soon - and OpenAI just picked a side.

⚠️ AI-Powered Bombing: Faster Than the Speed of Thought

If the Pentagon debate feels abstract, a new report from The Guardian makes it viscerally concrete. The recent strikes on Iran have been described as heralding a new era of AI-powered warfare - one where targeting decisions happen faster than any human could consciously process. The phrase used is striking: bombing "quicker than the speed of thought." This isn't science fiction. It's reportedly already happening.

The implications are staggering. When AI systems are making or accelerating lethal targeting decisions at machine speed, the traditional frameworks for military accountability, international law, and human oversight break down. Who is responsible when an AI-assisted strike hits the wrong target? How do you audit a decision that was made in milliseconds? These aren't hypothetical questions anymore - they're operational realities that military ethicists and policymakers are scrambling to address.

Guardian commentator Chris Stokel-Walker argues that Trump's embrace of AI in warfare represents a genuinely dangerous turning point - not just because of what AI can do, but because of how quickly the ethical guardrails are being dismantled in the rush to deploy it. The U.S. military reportedly used Claude in some capacity during the Iran strikes despite Trump's stated ban on the tool - underscoring just how messy the real-world picture is. This is a fast-moving story, and we'll keep tracking AI's role in military and geopolitical flashpoints.

🚀 Claude Hits #1 in the App Store - and Gets a Memory Upgrade

Here's the twist nobody saw coming: Anthropic refusing the Pentagon deal may have been the best marketing move in the company's history. Following the public dispute over military AI, Anthropic's Claude surged to the number one spot in the App Store - a remarkable vote of confidence from users who appear to be rewarding the company for its ethical stance. ChatGPT uninstalls, meanwhile, reportedly jumped 295% following OpenAI's DoD deal announcement.

Anthropic isn't just riding the wave passively - the company also announced meaningful upgrades to Claude's memory capabilities, a feature designed to attract users switching from other AI assistants. The upgrade allows Claude to remember context across conversations, a quality-of-life improvement that brings it closer to the persistent, personalized experience many users have been asking for. The timing is deliberate: Anthropic knows people are looking for alternatives, and it's making Claude easier to switch to and harder to leave.

TechCrunch even published a guide on how to make the switch from ChatGPT to Claude - a sign of just how significant this moment is for the competitive landscape. The fact that a principled business decision generated this much organic user growth will be studied in business schools. Whether Anthropic can hold onto these new users long-term will depend on whether the product continues to improve. But for now, the ethics-as-product-strategy play is working. We've been covering Anthropic's trajectory closely - this is a pivotal moment.

⚖️ Supreme Court Lets It Stand: AI Art Cannot Be Copyrighted

In a landmark moment for AI copyright law, the U.S. Supreme Court has declined to review a lower court ruling that AI-generated art cannot be copyrighted. By refusing to take up the case, the Supreme Court has effectively let stand the principle that copyright protection requires human authorship - and that an AI system alone cannot satisfy that requirement.

The implications ripple across every creative industry. For artists, photographers, writers, and musicians, this is a partial victory: it means purely AI-generated works enter the public domain immediately, unable to be owned or monetized as intellectual property by whoever ran the prompt. But the ruling also raises thorny questions about hybrid works - where a human directs, edits, and curates AI-generated content. How much human creative input is "enough" to qualify for copyright? That line remains blurry and will almost certainly be litigated further.

For the AI industry, the ruling creates a complex dynamic. Companies building AI-generated content tools now face a world where their outputs have less commercial protection - which could slow enterprise adoption in some sectors while accelerating it in others where open, uncopyrightable content is actually desirable. This is one of the most consequential legal precedents in AI's short history, and the creative industries will be feeling its effects for years.

⚡ Cursor Crosses $2B in Annualized Revenue - The AI Coding Boom Is Real

While the military AI debate dominates headlines, a quieter but equally significant milestone deserves attention: Cursor, the AI-powered coding assistant, has reportedly surpassed $2 billion in annualized revenue. That number is extraordinary for a product that most people outside the developer community hadn't heard of two years ago - and it signals just how deeply AI coding tools have penetrated professional software development.

Cursor's rise is part of a broader pattern: AI tools that embed deeply into professional workflows - where the value is immediately measurable and the switching costs are high - are generating remarkable revenue. Developers aren't just experimenting with Cursor; they're restructuring how they work around it. This is the kind of sticky, habitual adoption that every AI company dreams of. It also puts enormous pressure on GitHub Copilot and every other player in the space to match both the feature set and the developer experience.

For anyone building AI-powered tools, Cursor's trajectory offers a masterclass in product-led growth. Speaking of building with AI - if you're looking to spin up a website quickly, 60sec.site is an AI website builder that can have you live in under a minute. Worth a look if you need to move fast.

🛠️ Alibaba Releases Qwen 3.5 Small Models for On-Device AI

On the open-source front, Alibaba has released its Qwen 3.5 Small model family - a range of models spanning 0.8B to 9B parameters specifically designed for on-device applications. This is a significant move in the race to bring capable AI models to edge devices like smartphones, laptops, and embedded systems - where cloud connectivity can't always be assumed and latency requirements are strict.

The on-device AI space is heating up rapidly, and Alibaba's entry with a full family of small models gives developers a range of options depending on their hardware constraints. Smaller models like the 0.8B variant are designed for the most resource-limited environments, while the 9B model offers substantially more capability for devices with room to spare. The key challenge for on-device models is always the tradeoff between size and capability - Qwen 3.5 Small appears to be Alibaba's answer to that puzzle.
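To make that size-versus-capability tradeoff concrete, here's a back-of-envelope estimate of how much memory just the weights of a 0.8B or 9B model would need at common quantization levels. This is an illustrative sketch, not official Qwen figures - real footprints also include the KV cache and runtime overhead:

```python
# Rough weight-memory estimate for on-device model selection.
# Assumption: weights dominate memory; KV cache and activations add more on top.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB: parameter count times bytes per parameter."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# The two ends of the Qwen 3.5 Small family, at three common precisions.
for params in (0.8, 9.0):
    for label, bpp in (("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)):
        print(f"{params}B @ {label}: ~{weight_memory_gb(params, bpp):.1f} GB")
```

By this rough math, the 0.8B model quantized to int4 fits in well under a gigabyte - phone territory - while the 9B model at fp16 needs on the order of 18 GB, which is why aggressive quantization matters so much at the edge.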

This release comes alongside Alibaba's OpenSandbox, a new API designed to give developers a unified, secure, and scalable environment for running autonomous AI agents. Together, these releases position Alibaba as a serious contender in the developer tools space - not just for Chinese developers, but globally. For more on the latest AI research and model releases, keep an eye on what's coming out of the Alibaba ecosystem. Stay updated at dailyinference.com for daily coverage.

💬 What Do You Think?

This week's big theme is AI and the military - and it raises a question I keep coming back to: Should AI companies be allowed to decide unilaterally which governments and military applications they'll support, or should that be determined by law? Anthropic said no to the Pentagon and got rewarded in the App Store. OpenAI said yes and got a lucrative contract. Neither outcome was dictated by regulation - it was purely a business choice. Is that okay? Hit reply and tell me what you think - I genuinely read every response.

Thanks for reading today's Daily Inference. If you found this useful, forward it to a friend who follows AI - the more the merrier. See you tomorrow with more from the front lines of artificial intelligence.
