☀️ TRENDING AI NEWS
🤖 Mira Murati's Thinking Machines unveils 'interaction models' that process input and respond simultaneously
🚨 Google stops the first confirmed zero-day exploit built with AI assistance
🏢 GM lays off hundreds of IT workers to hire AI-native talent
🛠️ OpenAI launches Daybreak, a security AI agent for proactive vulnerability detection
Picture this: you're mid-sentence and an AI is already formulating its response - not waiting for you to finish, not buffering, just genuinely present in the conversation the way another person would be. That's what Mira Murati is building. And separately, the first confirmed AI-assisted zero-day exploit just got neutralized before it could detonate. Both stories landed yesterday, and together they paint a pretty vivid picture of where this technology is heading.
🤓 AI Trivia
Before we dig in - a quick brain teaser. What is the term for a previously unknown software vulnerability that attackers can exploit before developers have a chance to fix it?
🔐 A. Shadow bug
🔐 B. Zero-day exploit
🔐 C. Phantom patch
🔐 D. Cold start vulnerability
The answer is near the bottom of today's newsletter... keep scrolling. 👇
🤖 Mira Murati's Thinking Machines Wants AI That Thinks While It Talks
Every AI model you've ever used follows the same basic script: you talk, it listens, it responds, you listen. It's a turn-based exchange masquerading as a conversation. Thinking Machines, the startup founded by former OpenAI CTO Mira Murati, announced yesterday it's working to break that pattern entirely.
Simultaneous Input and Output - Not as Simple as It Sounds
The concept is called an 'interaction model.' Instead of the classic listen-then-respond cycle, Thinking Machines is building a model that continuously takes in audio, video, and text while simultaneously generating a response - more like a real phone call than a chatbot window. The goal is collaboration that mirrors how humans naturally work together.
This is technically hard. Most current architectures are built around discrete turns - complete the input, then begin the output. Rewiring that at a fundamental level is a genuine engineering challenge, not just a UI tweak. If Thinking Machines pulls it off, the gap between talking to an AI and talking to a person gets a lot smaller.
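Thinking Machines hasn't published implementation details, but you can get a feel for why full-duplex is architecturally different from turn-taking with a toy sketch. Everything below is invented for illustration - the point is just that the "listener" and "speaker" run concurrently over shared context, so the model responds to whatever has arrived so far instead of waiting for the turn to end:

```python
import asyncio

async def full_duplex(input_chunks):
    """Toy full-duplex loop: keep ingesting input while generating output.

    A classic chat model blocks until the input turn is complete. Here,
    a listener task and a speaker task run concurrently over shared state.
    """
    context = []          # shared, continuously updated input state
    transcript = []       # what the "model" says while still listening
    done = asyncio.Event()

    async def listen():
        for chunk in input_chunks:
            context.append(chunk)          # input arrives mid-generation
            await asyncio.sleep(0.01)      # simulate a streaming source
        done.set()

    async def speak():
        # Keep generating until input has ended AND the backlog is drained.
        while not done.is_set() or context:
            if context:
                # Respond to what has arrived so far, not the full turn.
                transcript.append(f"ack:{context.pop(0)}")
            await asyncio.sleep(0.005)

    await asyncio.gather(listen(), speak())
    return transcript

# The "model" starts responding before the sentence is finished.
result = asyncio.run(full_duplex(["I was", "thinking", "about..."]))
print(result)  # ['ack:I was', 'ack:thinking', 'ack:about...']
```

In a real interaction model the hard part is the part this sketch skips: the generator has to revise or abandon in-flight output as new audio and video reshape the context, which is exactly why it's an architecture problem rather than a UI tweak.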

🚨 Google Just Stopped the First AI-Built Zero-Day Exploit
This one is a milestone nobody wanted to reach. Google's Threat Intelligence Group (GTIG) confirmed yesterday that it spotted and neutralized a zero-day exploit that was developed with AI assistance - the first time this has been publicly confirmed. The vulnerability targeted an open-source, web-based system administration tool and was apparently being staged for a mass exploitation event.
From Script Kiddie to Industrial-Scale Threat
The broader context matters here. According to a separate Guardian report drawing on GTIG data, AI-powered hacking has gone from a niche concern to an industrial-scale threat in just three months. Criminal groups and state-linked actors are using commercial AI models to refine, personalize, and scale attacks at a speed human teams simply can't match.
The attack was stopped before it caused damage - but the fact that it happened at all changes the conversation. If you're following AI security developments, this is the kind of proof-of-concept moment that tends to accelerate both offense and defense investment simultaneously.

🛠️ OpenAI Launches Daybreak - Its Answer to AI-Powered Security Threats
The timing here is hard to ignore. On the same day Google confirmed an AI-built exploit was stopped, OpenAI announced Daybreak - a new security initiative that flips the script and uses AI to find vulnerabilities before attackers do.
Codex as the Threat Hunter
Daybreak is built on the Codex Security AI agent that launched back in March. The workflow: it creates a threat model based on an organization's actual codebase, maps likely attack paths, validates probable vulnerabilities, and then automates detection of the highest-risk ones. It's proactive scanning rather than reactive patching.
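OpenAI hasn't published Daybreak's internals, but the four-step workflow above maps onto a familiar pipeline shape. Here's a rough, entirely hypothetical sketch (module names, heuristics, and thresholds are all invented) of what "threat model, then map paths, then validate, then surface the highest-risk findings" looks like as code:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    path: str            # candidate attack path through the codebase
    severity: int        # 1 (low) .. 10 (critical)
    validated: bool = False

def threat_model(codebase: dict) -> list[Finding]:
    """Steps 1-2: derive likely attack paths from the actual codebase.
    Toy heuristic: any module that touches user input is a candidate."""
    return [Finding(path=name, severity=meta["exposure"])
            for name, meta in codebase.items() if meta["handles_input"]]

def validate(findings: list[Finding]) -> list[Finding]:
    """Step 3: confirm which candidates are probably exploitable."""
    for f in findings:
        f.validated = f.severity >= 5   # stand-in for real validation
    return findings

def detect_highest_risk(findings: list[Finding], top_n: int = 2) -> list[str]:
    """Step 4: automate detection of the highest-risk validated paths."""
    confirmed = [f for f in findings if f.validated]
    confirmed.sort(key=lambda f: f.severity, reverse=True)
    return [f.path for f in confirmed[:top_n]]

# Toy codebase: module name -> metadata
codebase = {
    "auth/login.py":   {"handles_input": True,  "exposure": 9},
    "api/upload.py":   {"handles_input": True,  "exposure": 7},
    "utils/format.py": {"handles_input": False, "exposure": 2},
    "admin/panel.py":  {"handles_input": True,  "exposure": 4},
}

watchlist = detect_highest_risk(validate(threat_model(codebase)))
print(watchlist)  # ['auth/login.py', 'api/upload.py']
```

The interesting design choice in this kind of pipeline is that validation happens before detection: by filtering to probable vulnerabilities first, the expensive continuous scanning only runs on paths worth watching.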
If you're using AI coding tools to ship faster, this is the logical next layer - automated security review that keeps pace with AI-accelerated development. It's also a direct play in the same space that Anthropic's Mythos security model was targeting, which we covered earlier this week.

🏢 GM Is Replacing Hundreds of IT Workers with AI-Native Talent
General Motors quietly made a significant workforce move yesterday - laying off hundreds of IT employees and announcing it's actively hiring replacements with stronger AI skills. This is part of a broader corporate AI talent reset that's accelerating across major companies, but GM's scale and specificity make it worth paying attention to.
The New Job Titles GM Is Actually Hiring For
GM is targeting roles in AI-native development, data engineering and analytics, cloud-based engineering, and agent and model development. They're also hiring for prompt engineering and new AI workflow design. In other words, this isn't vague 'digital transformation' talk - they're naming specific skill sets and building dedicated teams around them.
We covered Cloudflare's 1,100 AI-related job cuts earlier this week. GM's move follows a similar logic but with a twist: it's not just cutting, it's explicitly replacing with AI-skilled workers. The job market pressure here isn't just 'AI replaces humans' - it's 'AI replaces humans who don't know AI.'
Quick side note: if you're a developer looking to stand out right now, having a fast, polished web presence matters. 60sec.site lets you spin up a professional AI-built website in under a minute - worth a look if your portfolio page is still living in 2019.

⚡ Australia Moves to Force Datacentres to Fund Renewable Energy
Australia's state and federal energy ministers reached agreement yesterday on a framework that would require power-hungry datacentres to invest in enough new renewable energy to fully offset their consumption. All states backed the proposal except Queensland. The policy is a direct response to surging AI-driven compute demand straining the grid.
Additionality - The Key Word in This Debate
The critical detail is the word 'new.' Ministers are insisting investments must be in additional solar and wind capacity - not just buying existing renewable credits. That's a much higher bar than what most voluntary corporate sustainability pledges involve, and it's a pointed response to the kind of emissions underreporting we covered from Google's UK operations earlier this week.
If this framework becomes enforceable policy, it could set a template that other countries adopt. The energy and environmental costs of AI infrastructure are no longer abstract concerns - governments are starting to attach real conditions to them.
🌎 Trivia Reveal
The answer is B - a zero-day exploit! The name refers to the fact that developers have had 'zero days' to fix the vulnerability before it can be used against them. Now that AI can help discover and build these exploits faster than ever, the race between attackers and defenders just got a serious speed boost on both sides.
💬 Quick Question
The GM story got me thinking - are you actively upskilling in AI tools right now, or does it feel more like background noise you haven't acted on yet? Hit reply and let me know where you're at - I read every response and I'm genuinely curious about what's actually changing in people's day-to-day work.
That's all for today - see you tomorrow with more. If you found this useful, the best thing you can do is forward it to one person who'd appreciate it. And as always, the full archive is at dailyinference.com.