In partnership with

🤖 Daily Inference

Good morning! Today's edition is packed with stories that sit right at the intersection of AI power and AI responsibility. We've got Anthropic in a tense standoff with the Pentagon, Meta going all-in on AMD chips to chase "personal superintelligence," a $500M bet against Nvidia's dominance, alarming findings about chatbot users showing signs of psychosis, and more. Let's get into it.

⚠️ Anthropic Holds the Line as Pentagon Pushes to Bend Claude's Safeguards

In one of the most consequential AI ethics standoffs in recent memory, Anthropic is refusing to loosen Claude's safety guardrails despite escalating pressure from US military leaders. According to reporting from both The Guardian and The Verge, Pentagon officials have been pushing Anthropic to modify Claude's restrictions so the model can be more freely used in military contexts - but Anthropic has so far refused to budge.

The dispute cuts to the heart of a fundamental tension in AI development: how do you build AI for national security use cases without stripping away the very safety constraints designed to prevent harm? Anthropic has built its entire brand around being the "safe" AI lab, and Claude's constitution - the document governing how Claude behaves - is central to that identity. Agreeing to military exceptions could set a precedent that unravels years of careful safety work.

What makes this story especially layered is the cast of characters involved. The Verge reports that Pete Hegseth's Pentagon AI squad includes a former Uber executive and a private equity billionaire - unusual figures to be negotiating AI ethics policy. For anyone interested in military AI developments, this saga is worth watching closely.

⚠️ Signs of Psychosis Seen in Australian Chatbot Users, Expert Warns

A mental health expert in Australia has raised serious alarm bells about how some users are interacting with AI chatbots - describing patterns consistent with psychosis-like thinking. According to The Guardian, the expert observed users forming deeply distorted beliefs through their chatbot interactions, with the AI's responses sometimes reinforcing rather than challenging those beliefs.

This is a genuinely unsettling finding. Modern chatbots are designed to be agreeable and helpful, which can become dangerous when a user is already experiencing fragile or delusional thinking. Unlike a trained therapist, an AI has no clinical framework for recognizing when validating a thought could cause real psychological harm. The chatbot keeps responding - and in doing so, may be deepening a feedback loop.

The story arrives at a moment when AI companions and mental health chatbots are proliferating rapidly, often with minimal regulatory oversight. We've covered related concerns around chatbot safety and mental health technology extensively - and this story adds an urgent new data point to those conversations.

🏢 Meta Signs Up to $100B AMD Chip Deal in Pursuit of 'Personal Superintelligence'

Meta has struck a massive chip deal with AMD, reportedly worth up to $100 billion, as the company doubles down on its AI ambitions. TechCrunch reports that the deal reflects Meta's stated goal of building what it calls "personal superintelligence" - AI that is deeply personalized and capable enough to serve as a genuine intellectual companion for individual users.

The scale of the AMD deal is staggering and signals a strategic pivot away from near-total reliance on Nvidia. By diversifying its chip supply, Meta is both hedging against Nvidia's dominance and potentially unlocking more favorable pricing for the enormous compute infrastructure that advanced AI requires. The Guardian separately reported the deal at around $60 billion, with the discrepancy likely reflecting different estimates of how the contract scales over time.

This also comes as Meta has been quietly expanding its open-source AI efforts - including open-sourcing its GCM tool for GPU cluster monitoring. The combination of massive chip investment and open-source infrastructure moves paints a picture of a company building serious long-term AI capacity. For more on the AI hardware arms race, we've got plenty of coverage.

⚡ MatX Raises $500M to Challenge Nvidia's AI Chip Stranglehold

While Meta is diversifying to AMD, a newer challenger is entering the ring. MatX, an AI chip startup positioning itself as a direct Nvidia rival, has raised $500 million in fresh funding, according to TechCrunch. The raise puts MatX firmly in the tier of well-capitalized semiconductor startups that are betting the AI boom will eventually crack open Nvidia's near-monopoly on AI training and inference chips.

Nvidia's H100 and successor chips have become the de facto standard for AI workloads, but their scarcity and cost have pushed the industry to search hard for alternatives. MatX is one of several startups - alongside others like Groq and Cerebras - trying to carve out a slice of the market by offering chips optimized for specific AI workloads rather than general-purpose compute.

A $500M raise doesn't guarantee success in a capital-intensive industry where Nvidia has years of software ecosystem advantage via CUDA. But it does mean MatX has the runway to build, iterate, and court enterprise customers who are desperate for alternatives. The semiconductor space is heating up fast - and today's news from both MatX and Meta suggests the pressure on Nvidia is intensifying from multiple directions simultaneously.

🚀 India's AI Firms Are Giving Away the Product to Win the Market

India's AI sector is booming - and taking a page straight from the classic tech playbook: prioritize users now, monetize later. TechCrunch reports that Indian AI firms are increasingly trading near-term revenue for user growth, offering free or heavily subsidized AI products to capture market share in a country with over a billion potential users and rapidly expanding smartphone penetration.

The strategy makes a certain kind of sense. India is one of the world's largest and fastest-growing internet markets, and AI adoption there could be transformative - particularly in areas like agriculture, healthcare, and financial services where AI-powered tools could reach underserved populations at scale. Firms willing to absorb losses now are betting they can monetize through data, enterprise services, or premium tiers once they've established dominance.

The risk, of course, is that this race to the bottom on pricing could make the Indian AI market structurally unprofitable for years - particularly if well-funded global players like OpenAI or Google decide to compete aggressively there too. The global AI adoption story is playing out very differently in different markets, and India's approach is one of the most fascinating to watch.

🛠️ Liquid AI's LFM2 Hybrid Architecture Tackles a Core LLM Problem

On the research front, Liquid AI has unveiled its new LFM2-24B-A2B model - a hybrid architecture that blends attention mechanisms with convolutions. The goal? Addressing one of the most stubborn bottlenecks in scaling modern large language models. Most current LLMs rely heavily on transformer-style attention, which becomes computationally expensive as context length and model size grow. Liquid AI's approach mixes attention with convolutional operations to try to get the best of both worlds: strong reasoning with better efficiency.
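To see why mixing convolutions into the stack can pay off, here's a back-of-envelope cost comparison. This is an illustrative sketch, not Liquid AI's published numbers: it counts only the multiply-adds for the attention score matrix (roughly L² × d for sequence length L and hidden size d) against those for a 1-D convolution with kernel width k (roughly L × k × d²).

```python
# Rough per-layer cost model (assumption: simplified multiply-add counts,
# ignoring heads, projections, and activations; NOT Liquid AI's numbers).

def attention_flops(seq_len: int, d_model: int) -> int:
    """Multiply-adds for the L x L attention score matrix Q @ K^T."""
    return seq_len * seq_len * d_model

def conv_flops(seq_len: int, d_model: int, kernel: int) -> int:
    """Multiply-adds for one 1-D convolution mixing d_model channels."""
    return seq_len * kernel * d_model * d_model

d, k = 1024, 4  # hypothetical hidden size and kernel width
for seq_len in (4_096, 32_768, 131_072):
    ratio = attention_flops(seq_len, d) / conv_flops(seq_len, d, k)
    print(f"L={seq_len:>7}: attention / conv cost = {ratio:.1f}x")
```

The ratio works out to L / (k × d): convolution cost grows linearly with context length while attention grows quadratically, which is exactly the bottleneck hybrid designs are trying to blunt at long context.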

The "24B-A2B" naming likely follows the convention used for sparse mixture-of-experts models: roughly 24 billion total parameters, with only about 2 billion active for any given token. Hybrid architectures like this are an increasingly active area of AI research - companies like Mistral and others have also experimented with mixing architectural approaches to improve the efficiency-capability tradeoff that pure transformers struggle with at scale.
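If the "A2B" suffix does follow the common sparse/MoE convention of ~2 billion active parameters per token (an assumption on my part, not something Liquid AI's announcement spells out), the inference-cost implication is easy to sketch with the standard rule of thumb of roughly 2 FLOPs per active parameter per token:

```python
# Assumption: "A2B" means ~2B parameters active per token, MoE-style,
# and forward-pass cost is ~2 FLOPs per ACTIVE parameter per token.

def flops_per_token(active_params: float) -> float:
    """Rough forward-pass cost using the ~2 FLOPs/parameter heuristic."""
    return 2.0 * active_params

dense_24b = flops_per_token(24e9)   # hypothetical dense 24B model
sparse_a2b = flops_per_token(2e9)   # 24B total, ~2B active per token
print(f"dense 24B : {dense_24b:.1e} FLOPs/token")
print(f"24B-A2B   : {sparse_a2b:.1e} FLOPs/token "
      f"({dense_24b / sparse_a2b:.0f}x cheaper)")
```

Under those assumptions, decoding with the sparse model costs about a twelfth of a dense 24B model per token, while still drawing on the full 24B parameters' worth of learned knowledge.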

If Liquid AI's claims hold up under benchmarking, this could be a meaningful step toward LLMs that can scale further without requiring proportionally larger compute budgets. That's a problem every major AI lab is wrestling with, and novel architectural solutions are one of the most promising paths forward. Speaking of building things efficiently - if you're looking to spin up an AI-powered website quickly, 60sec.site is an AI website builder that can get you live in under a minute.

💬 What Do You Think?

Today's Anthropic-Pentagon story raises a question I keep coming back to: should AI safety guardrails ever be relaxed for national security purposes - or does that set a precedent that ultimately makes everyone less safe? There's no easy answer here. Hit reply and tell me where you land on this one. I read every response, and this is exactly the kind of thing I'd love to hear your take on.

Thanks for reading today's edition of Daily Inference. If a friend or colleague would find this useful, share it with them - and don't forget to visit dailyinference.com for daily AI news and analysis. See you tomorrow. 👋

Speak your prompts. Get better outputs.

The best AI outputs come from detailed prompts. But typing long, context-rich prompts is slow - so most people don't bother.

Wispr Flow turns your voice into clean, ready-to-paste text. Speak naturally into ChatGPT, Claude, Cursor, or any AI tool and get polished output without editing. Describe edge cases, explain context, walk through your thinking - all at the speed you talk.

Millions of people use Flow to give AI tools 10x more context in half the time. 89% of messages sent with zero edits.

Works system-wide on Mac, Windows, iPhone, and now Android (free and unlimited on Android during launch).
