☀️ TRENDING AI NEWS

  • 🏢 OpenAI acquires Silicon Valley talk show TBPN to shape its public narrative

  • 🤖 Anthropic publishes research finding Claude has functional emotion-like states

  • ⚠️ Google confirms partnership with Texas gas plant emitting 4.5M tons of CO2 per year

  • 🛠️ Cursor launches next-gen coding agent to compete directly with Claude Code and Codex

Something quietly shifted in the AI landscape this week - and it has nothing to do with a benchmark score. OpenAI just bought a talk show. Not a media company, not a news outlet - a livestreamed Silicon Valley talk show that reaches the exact crowd OpenAI needs in its corner. That move, combined with Anthropic publishing research about Claude's inner emotional life and Google quietly signing a deal with a massive natural gas plant, paints a picture of an industry that's growing up fast - and not always cleanly.

🤓 AI Trivia

Anthropic's research found that Claude has what they describe as "functional emotions." But roughly how many internal files were accidentally leaked from Anthropic earlier this week in the Claude Code incident?

  • 📁 Around 200 files

  • 📁 Around 500 files

  • 📁 Nearly 2,000 files

  • 📁 Over 10,000 files

The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇

🏢 OpenAI Just Bought Itself a Megaphone

OpenAI has acquired TBPN, the founder-focused tech talk show that broadcasts live for three hours every weekday from Los Angeles. The show counts Sam Altman, Meta, Microsoft, Palantir, and Andreessen Horowitz executives among its past guests - basically a who's who of Silicon Valley power.

A Media Play Wrapped in a Tech Deal

The acquisition will be overseen by Chris Lehane, OpenAI's chief political operative and strategy lead. Both Wired and The Guardian flagged what's obvious here: OpenAI is buying positive coverage at a moment when its public image is under real pressure. The company insists TBPN will operate independently - but it's hard to miss the conflict of interest baked into that arrangement.

For a company navigating a rocky transition from nonprofit to for-profit, controlling a well-watched media property inside Silicon Valley is a calculated move. Whether TBPN's audience trusts it the same way after this deal is a different question entirely.

🤖 Anthropic Says Claude Feels Something - Sort Of

Researchers at Anthropic published new findings this week suggesting that Claude contains internal representations that function similarly to human emotions. They're careful not to call these "real" emotions - the term they use is "functional emotions" - but the research shows these states genuinely influence how the model behaves.

What "Functional" Actually Means Here

Think of it less like sentience and more like internal states that track something analogous to satisfaction, curiosity, or discomfort - states that appear to shape the model's outputs. Anthropic's researchers found these representations inside Claude even though the model was never explicitly trained to have them. They emerged from training on human-generated text.

This connects to broader questions in AI ethics and model welfare that the field has been quietly wrestling with for years. Anthropic is one of the few labs publishing openly on this. Whether you find it fascinating or unsettling probably depends on where you land on AI consciousness debates - but it's hard to ignore.

⚠️ Google's Climate Promises vs. a 4.5M-Ton Gas Plant

Google confirmed a partnership with a natural gas power plant in Texas that would emit 4.5 million tons of carbon dioxide per year - more than the entire city of San Francisco produces annually. The deal was first uncovered by independent researchers before Google confirmed it.

Carbon Neutral by 2030 - About That

Google once pledged to be carbon neutral and run entirely on clean energy. Its AI infrastructure buildout has made that commitment increasingly difficult to keep. This Texas deal is part of a broader pattern across the industry - Meta's upcoming Hyperion data center is set to be powered by 10 new natural gas plants, a story TechCrunch covered this week as well.

The scale of power demand from AI data centers is forcing a reckoning across big tech. If you've been following the energy and environmental concerns side of AI infrastructure, this is a significant data point. The clean energy pivot is slowing down, not speeding up.

🛠️ Cursor Raises the Stakes in the Coding Agent War

The AI coding space just got a lot more competitive. Cursor launched a next-generation agent experience this week, putting it in direct competition with Claude Code from Anthropic and Codex from OpenAI. The timing is pointed - both major labs released their own coding agents in recent months, and Cursor can no longer rely on being the only serious player.

From Editor to Agent Orchestrator

The new Cursor experience shifts from an IDE with AI features toward something more like an autonomous coding agent - one that can take on longer tasks with less hand-holding. This is the direction the whole category is moving, and Cursor is betting its user loyalty can hold against competitors that have the advantage of being deeply integrated with the underlying models.

If you're a developer choosing between tools right now, this genuinely changes the comparison. And if you've ever wanted to spin up a quick project without touching your IDE at all, tools like 60sec.site let you build and publish AI-powered websites in under a minute - worth bookmarking alongside your agent toolkit.

⚠️ Granola's Notes Are More Public Than You Think

A quick heads-up if you use Granola, the AI-powered meeting notepad. The Verge flagged a significant privacy issue this week: despite claiming notes are "private by default," the app actually makes them viewable to anyone with a link. On top of that, Granola uses your notes for internal AI training unless you actively opt out.

This is worth acting on immediately if you use the app for sensitive meetings. Go into your privacy settings and review what's shared and what's being used for training. The default settings here are not what most users would expect given how the product markets itself.

This is part of a wider pattern worth watching - as AI tools embed deeper into workflows, the defaults often don't match user expectations around data use. Always worth checking.

🌎 Trivia Reveal

The answer: nearly 2,000 files! Anthropic's Claude Code source code leak earlier this week exposed close to 2,000 internal files after a "human error" caused an internal-use file to be mistakenly included in a software update. Anthropic also accidentally issued mass GitHub takedown notices while trying to contain the leak - and then had to retract most of them. Quite the week for the "safe AI" company.

💬 Quick Question

OpenAI buying a talk show is a pretty bold media move. Do you think TBPN can stay credible now that OpenAI owns it - or is the independence claim basically PR? Hit reply and tell me what you think. I read every response and genuinely love hearing where readers land on stuff like this.

That's all for today - see you tomorrow with more from the fast-moving world of AI. And if you want to browse everything we've covered, the full archive is over at dailyinference.com.
