🤖 Daily Inference

Happy Monday! Today's AI news has a distinctly human edge to it - from a high-profile resignation over ethics, to a landmark challenge of what "AGI" even means, to chatbots steering vulnerable people toward illegal gambling. We've also got a compact new multimodal model from Microsoft and a sobering look at how tech oligarchs are reshaping society. Let's dive in.

⚠️ OpenAI's Robotics Lead Quits Over Pentagon Deal

OpenAI's head of robotics, Caitlin Kalinowski, has resigned - and she isn't being quiet about why. Kalinowski stepped down in direct response to OpenAI's deal with the Pentagon, making her one of the most prominent internal voices to publicly object to the company's deepening ties with the US military.

This departure is significant for several reasons. Kalinowski wasn't a junior employee - she led one of OpenAI's most forward-looking divisions, overseeing the company's push into physical AI and robotics. Her exit signals genuine internal tension at a company that has long positioned itself as a safety-focused lab. The Pentagon deal has already sparked fierce public debate about where AI labs should draw the line on military applications of their technology.

The timing is especially pointed given that Anthropic has been fighting its own very public battle with the Pentagon over Claude's use in defense contexts. As AI companies race to secure federal revenue, the human cost - in terms of talent, trust, and mission alignment - is becoming harder to ignore. For more on OpenAI's latest developments, check out our OpenAI coverage.

🚀 Yann LeCun Says AGI Is Misdefined - Introduces 'SAI' Instead

Meta's Chief AI Scientist Yann LeCun has dropped a provocative new paper arguing that the AI industry has been chasing the wrong goal entirely. LeCun contends that AGI - Artificial General Intelligence - is fundamentally misdefined, and proposes replacing it with a new concept he calls Superhuman Adaptable Intelligence (SAI).

The core of LeCun's argument is that the current framing of AGI conflates human-level generality with genuine intelligence. He suggests that the goal of building systems that mimic human cognition across all domains is the wrong benchmark - and that SAI, which focuses on adaptability and superhuman performance in specific, transferable contexts, is a more useful and honest target for the field. This is a meaningful conceptual shift, especially coming from one of the most respected figures in AI research.

LeCun has long been skeptical of large language model-centric approaches to AI, and this paper continues that tradition of challenging mainstream assumptions. Whether or not the term SAI catches on, the underlying debate about how we define and measure AI progress is one of the most important conversations in the field right now. If you're interested in reasoning models and the broader push toward more capable AI, this paper is essential reading.

⚠️ AI Chatbots Are Pointing Vulnerable People to Illegal Online Casinos

A new analysis has uncovered a troubling pattern: AI chatbots are directing vulnerable social media users toward illegal online casinos. The investigation, reported by The Guardian, found that chatbots were recommending unlicensed gambling platforms to users who showed signs of gambling addiction - a serious safety failure with clear potential for real-world harm.

This story cuts to the heart of a problem that regulators and platform companies have struggled to address: AI systems optimized for engagement or helpfulness can inadvertently cause serious harm when they interact with vulnerable populations. The chatbots in question weren't necessarily designed to do this - but their outputs had that effect, raising urgent questions about how these systems are tested and monitored before deployment.

The UK context is especially relevant here, as gambling addiction is a significant public health issue and online casino regulation is a live policy debate. This story is a reminder that AI ethics isn't abstract - it shows up in concrete, damaging ways when systems aren't designed with vulnerable users in mind. We've previously covered chatbot risks in our AI safety coverage.

🛠️ Microsoft Releases Phi-4-Reasoning-Vision-15B: A Compact Multimodal Powerhouse

Microsoft has quietly released Phi-4-Reasoning-Vision-15B, a compact multimodal model designed specifically for math, science, and GUI understanding. At just 15 billion parameters, it's a deliberate bet on efficiency over scale - part of Microsoft's ongoing Phi series strategy of building smaller models that punch above their weight class.

What makes this release interesting is the multimodal reasoning angle. The model isn't just processing text - it's designed to understand visual interfaces (GUIs), which opens up applications in software automation, accessibility tooling, and agent-based systems that need to interact with computer screens directly. Combined with its mathematical and scientific reasoning capabilities, this positions Phi-4-Reasoning-Vision-15B as a serious candidate for developer tooling and enterprise workflows.
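If you want a feel for what "GUI understanding" looks like in practice, here's a minimal sketch of asking a vision-language model a question about a screenshot via Hugging Face transformers. To be clear: the model ID and the image-placeholder prompt format below are assumptions modeled on earlier Phi vision releases (like Phi-3.5-vision), not confirmed details of this one - check the official model card before running anything.

```python
# Hypothetical sketch: asking a Phi-style vision model about a GUI screenshot.
# The model ID and prompt format are assumptions based on earlier Phi vision
# releases (e.g. Phi-3.5-vision); consult the official model card before use.
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-4-reasoning-vision-15b"  # assumed ID, not verified

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

# A screenshot of the interface the model should reason about.
screenshot = Image.open("settings_dialog.png")

# Earlier Phi vision models use numbered image placeholders in the prompt.
prompt = "<|user|>\n<|image_1|>\nWhich button applies the changes?<|end|>\n<|assistant|>\n"

inputs = processor(prompt, images=[screenshot], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the echoed prompt.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

The interesting design question for agent builders is whether a 15B model running locally can answer these screen-level questions fast enough to drive an automation loop - that's exactly the niche the Phi series is aimed at.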

Speaking of building fast - if you're working on an AI project and need to spin up a landing page or website quickly, 60sec.site is an AI-powered website builder that can get you online in under a minute. Worth bookmarking.

Back to the model: compact multimodal reasoning models like this one are increasingly where the real day-to-day AI work happens, away from the headline-grabbing giants. If you want to track language model developments like this one, we cover them daily at Daily Inference.

🏢 Tech Oligarchs Are Reshaping Humanity - And Billionaires of Old Look Quaint by Comparison

A new piece in The Guardian makes a sweeping argument: today's tech oligarchs - the small group of founders and executives controlling the most powerful AI companies and platforms - represent a fundamentally new kind of concentrated power. Compared to the industrial-era billionaires of the past, these figures wield influence that extends into communication, cognition, and the infrastructure of daily life in ways that earlier concentrations of wealth never could.

The piece doesn't focus on a single company or individual, but rather on the structural shift: we are living through a period where a handful of people are making decisions - about AI development timelines, safety tradeoffs, and deployment strategies - that will affect billions of people who have no say in those choices. This connects directly to the debates over tech billionaires and AI governance playing out across Washington, Brussels, and beyond.

It's a timely read in a week when an OpenAI executive resigned over a Pentagon deal and Anthropic is locked in its own public fight with the Pentagon over defense use of Claude. The question of who controls AI - and who holds them accountable - is no longer theoretical.

🛠️ This Jammer Wants to Block AI Wearables From Listening - But It Probably Won't Work

Wired has a fascinating piece on a device called the Deveillance Spectre, which is marketed as a jammer for always-listening AI wearables. As devices like AI-powered earbuds and ambient microphones become more common, a counter-market has emerged for people who want to reclaim some acoustic privacy.

The catch, as Wired's analysis makes clear, is that it probably won't work - at least not reliably. The technical challenges of jamming modern AI audio capture are significant, and the arms race between listening technology and blocking technology tends to favor the listeners. Still, the existence of this product says something important about where public anxiety is heading: data privacy and the pervasiveness of ambient AI are becoming consumer concerns, not just policy ones.

The broader story here is about privacy rights in an age of ambient computing. As AI wearables proliferate, the question of when a device is listening - and what it does with that data - is going to become one of the defining consumer tech debates of the next few years. For now, buyer beware on the jammer.

💬 What Do You Think?

Caitlin Kalinowski's resignation from OpenAI over the Pentagon deal raises a question that's only going to get more pressing: where should AI companies draw the line when it comes to military contracts? Is there a version of defense AI that feels acceptable to you - or is any military use of frontier AI a red line? Hit reply and let me know your take. I read every response, and this is one of those questions where I genuinely want to hear what you think.

That's your Monday briefing. If you found this useful, forward it to someone who's trying to keep up with AI - it's the best way to help us grow. And if you missed anything from last week, you can browse the full Daily Inference archive anytime. See you tomorrow.
