☀️ TRENDING AI NEWS

  • 🛠️ AutoAgent lets AI engineers skip the prompt-tuning loop entirely - the library rewrites its own agent harness overnight

  • ⚠️ Utah becomes only the second US state to let an AI system prescribe psychiatric drugs without a doctor

  • 🏢 OpenAI's C-suite is reshuffling again - Fidji Simo on medical leave, Brad Lightcap moves to 'special projects'

  • 🎵 A folk musician discovered AI-faked songs on her Spotify profile - and the copyright system has no good answer for her

If you've been spending evenings manually tweaking system prompts and rerunning agent benchmarks, today's newsletter might genuinely change your workflow. We've also got a story that will make you deeply uncomfortable about where AI is being given clinical authority - and one that shows just how broken the music copyright system is when AI enters the picture. Let's get into it.

🤓 AI Trivia

Which US state was the FIRST to allow an AI system to handle clinical prescribing authority before Utah's new psychiatric drug pilot?

  • 🏥 California

  • 🏥 Texas

  • 🏥 Utah (it was always Utah)

  • 🏥 Utah is actually the second - but which state was first?

The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇

🛠️ AutoAgent: The Library That Fires Its Own Prompt Engineer

Every AI engineer has lived the same loop: write a system prompt, run your agent against a benchmark, read the failure traces, tweak the prompt, add a tool, repeat. It's the most unglamorous part of building agentic systems - and it eats weeks.

AutoAgent is a new open-source library that flips this on its head. Instead of you tuning the agent, the agent tunes itself. It analyzes its own failure traces overnight, rewrites its system prompt, adjusts its tool selection, and re-evaluates - all without you touching a line of code.
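For the engineers in the audience, here's what that overnight loop looks like in spirit. This is a hypothetical sketch, not AutoAgent's actual API - every name below (`AgentConfig`, `evaluate`, `revise`, `overnight_loop`) is an illustrative stand-in, with a toy benchmark in place of real failure-trace analysis:

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    system_prompt: str
    tools: list = field(default_factory=list)

def evaluate(config, cases):
    """Stand-in benchmark: a case 'passes' if its keyword appears in
    the system prompt and its required tool is enabled."""
    failures = []
    for keyword, tool in cases:
        if keyword not in config.system_prompt or tool not in config.tools:
            failures.append((keyword, tool))
    return failures

def revise(config, failures):
    """Stand-in self-rewrite: fold missing keywords into the prompt and
    enable missing tools, mimicking trace-driven tuning."""
    prompt, tools = config.system_prompt, list(config.tools)
    for keyword, tool in failures:
        if keyword not in prompt:
            prompt += f" Always consider {keyword}."
        if tool not in tools:
            tools.append(tool)
    return AgentConfig(prompt, tools)

def overnight_loop(config, cases, max_iters=10):
    """Evaluate, revise on failures, repeat - no human in the loop."""
    for _ in range(max_iters):
        failures = evaluate(config, cases)
        if not failures:
            break
        config = revise(config, failures)
    return config

cases = [("dates", "calendar"), ("math", "calculator")]
final = overnight_loop(AgentConfig("You are a helpful agent."), cases)
print(evaluate(final, cases))  # → []
```

The real library presumably uses an LLM to read failure traces and propose prompt edits, but the control flow - evaluate, diagnose, revise, repeat - is the part you'd run unattended.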

Grunt Work, Automated

The practical implication here is significant for anyone building production AI agents. Iteration cycles that used to take days of manual effort can now run while you sleep. The library is fully open-source, so you can inspect exactly what changes it's making to your harness - which matters a lot if you're deploying in any sensitive environment.

This is one of those tools that sounds almost too convenient - worth digging into the mechanics before you trust it on anything critical. But the concept is solid and the timing is right as agentic systems move from demos to real deployments.

⚠️ Utah Just Let an AI Chatbot Write Psychiatric Prescriptions

This one warrants a slow read. Utah has launched a one-year pilot that allows an AI system to prescribe and refill psychiatric drugs - without a doctor signing off. It's only the second time any US state has handed this kind of clinical authority to an AI system.

State officials argue it could reduce costs and help close mental health care gaps - which are genuinely severe in many parts of the country. But physicians are raising serious red flags: the system is described as opaque, and there are real questions about whether it will actually reach patients who lack access, or simply serve those who already have options.

Where Clinical Accountability Goes When There's No Doctor

The healthcare AI space has been moving fast, but prescribing psychiatric medication sits in a different risk category than scheduling appointments or summarizing notes. The drugs involved - used to treat conditions like depression, anxiety, and bipolar disorder - require nuanced dosing decisions and ongoing monitoring.

If you're building in medical AI or following AI regulation, this pilot is one to watch closely. It will either become a model for other states or a cautionary tale - probably within the year.

🎵 A Folk Singer Found AI Fakes on Her Spotify - The Copyright System Had Nothing for Her

In January, folk artist Murphy Campbell opened her Spotify profile and found songs she'd never uploaded. They were her recordings - but with altered vocals. Someone had pulled her music, run it through an AI voice tool, and distributed it back under her name.

She quickly realized she was dealing with two separate problems: AI-generated fakes, and a copyright system that wasn't built for any of this. Existing law struggles to handle AI-cloned vocals, and the takedown processes platforms offer are slow and often ineffective.

When the Platform Is Also the Problem

Campbell's case has become something of a flashpoint in the broader debate about AI music fraud and creative rights. Independent artists with smaller followings are disproportionately vulnerable - they don't have label lawyers or the audience power to surface the problem quickly.

The music industry fought hard to get the AI training data debate into public view. But this is a different problem - it's about what happens after a model is trained, when anyone can generate a convincing vocal clone and push it to streaming platforms at scale.

🏢 OpenAI's Leadership Shuffle Keeps Going

Another week, another round of OpenAI executive changes. Fidji Simo - CEO of Applications - is taking medical leave for several weeks to treat a neuroimmune condition. While she's out, OpenAI president Greg Brockman will cover her responsibilities.

Separately, COO Brad Lightcap is moving into a new role focused on 'special projects' - a title that traditionally signals either a lateral shift or something genuinely strategic that isn't ready to be named publicly. CMO Kate Rouch is also stepping away, citing cancer recovery, with plans to return when her health allows.

Three Executives, One Week, Lots of Questions

It's worth noting the broader context: OpenAI closed a $122 billion funding round just last week at an $852 billion valuation. That level of capital influx typically comes with intense pressure to restructure and scale rapidly. Leadership changes at this pace aren't necessarily alarming - but they're worth tracking as the company navigates its shift toward a more commercial structure.

🏢 Anthropic Is Getting Political - and Spending Money to Prove It

With midterms approaching, Anthropic has launched a new Political Action Committee (PAC) aimed at backing candidates who align with the company's AI policy agenda. This is a significant step - it signals that Anthropic isn't content to just publish research and lobby quietly. It wants to shape elections.

The timing is interesting. Anthropic is currently the hottest trade in AI private markets according to Rainmaker Securities president Glen Anderson, with secondary market activity for private shares at record highs. The company is flush with capital - and now it's deploying some of that into political influence.

When AI Labs Start Playing Election Season

OpenAI and Google have both built out substantial policy teams, but a dedicated PAC is a different kind of commitment. It puts Anthropic firmly in the category of political actor, not just technology company. Given how much AI regulation is in flux right now - especially with California's independent standards push still in play - having elected allies matters a lot.

Speaking of building fast - if you need a site up quickly for a project or product launch, 60sec.site is an AI website builder that has you live in under a minute. Worth bookmarking.

🌎 Trivia Reveal

The answer: the article doesn't say. The Verge notes Utah's pilot is "only the second time" a US state has granted clinical prescribing authority to an AI system, but it never names the first. So Utah isn't blazing a completely new trail here - though psychiatric prescribing is a significantly higher-stakes domain than whatever came before, which makes this pilot worth watching regardless.

💬 Quick Question

The Utah psychiatric prescribing story sits in genuinely uncomfortable territory. Where do you draw your personal line? Is there a category of decision where you'd never want AI to have final authority - no matter how good the technology gets? Hit reply and tell me. I read every response and they genuinely shape what we cover.

That's all for today - back tomorrow with more from the fastest-moving space in tech. If you want to dig into anything we've covered this week, the full archive is at dailyinference.com.
