☀️ TRENDING AI NEWS
🤖 OpenAI is redirecting its entire research org toward building a fully automated AI researcher agent
🏢 Trump unveils 7-point AI framework that would block states from regulating AI
⚠️ Senior European journalist suspended after admitting AI hallucinated quotes he published
🛠️ NVIDIA releases Nemotron-Cascade 2 - an open 30B model with only 3B active parameters
Something quietly cracked open in the AI research world this week - and it wasn't a model release or a benchmark. It was a mission statement.
OpenAI has told its research teams to stop whatever they're doing and point everything at one goal: building an AI that can do science by itself. Fully automated. No human in the loop. Meanwhile, a senior journalist in Europe published AI-generated quotes he never verified, and the U.S. government dropped a policy bomb that could reshape how AI gets regulated for years. A lot moved this week - let's get into it.
🤓 AI Trivia
NVIDIA's new Nemotron-Cascade 2 is a 30-billion-parameter Mixture-of-Experts model - but how many of those parameters are actually "active" during any given inference run?
🔢 30 billion (all of them)
🔢 15 billion
🔢 3 billion
🔢 7 billion
The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇
🤖 OpenAI Is Going All-In on a Fully Automated Research Agent
This is a big strategic shift. OpenAI has refocused its research organization around a single grand challenge: building a fully automated AI researcher - an agent-based system capable of independently tackling large, complex scientific problems from start to finish.
What 'Fully Automated' Actually Means Here
This isn't a smarter chatbot or a coding assistant. The goal is a system that can autonomously identify research questions, design experiments, interpret results, and iterate - the way a human scientist would, but without needing to stop and ask for directions. OpenAI is throwing significant engineering resources at this, per MIT Technology Review's reporting.
If it works even partially, the implications for drug discovery, materials science, and mathematics are enormous. The question is whether autonomous research agents can handle the messy, often ambiguous reality of actual scientific work - or whether they'll confidently produce very polished nonsense.
If you've been following our AI research coverage, you know this has been building for a while. But making it the org's primary mission is new.
🏛️ Trump's AI Framework Targets State Regulation - and Means Business
On Friday, the Trump administration dropped a 7-point AI regulation blueprint that sends a clear signal: Washington wants one set of AI rules, and it doesn't want states writing their own.
Federal Preemption as the Core Move
The framework advises Congress to bar states from creating their own AI regulations that conflict with a national strategy - framing it as necessary to "achieve global AI dominance." The plan calls for minimal federal AI rules beyond child safety protections, and even there, it shifts significant responsibility toward parents rather than companies.
For companies, this is good news in the short term - fewer compliance headaches across 50 different state regimes. For AI safety advocates, it's the opposite: it removes one of the most active areas of AI accountability work happening right now. Several states have passed or proposed AI-specific laws in the past year, and this framework would put a ceiling on that.
This is also a direct response to the EU AI Act's approach - the administration is clearly betting that lighter-touch federal rules will let U.S. companies move faster than their international competitors.
⚠️ A Senior Journalist Published AI-Hallucinated Quotes - and Got Suspended
Here's a story that's going to echo through every newsroom for the next few months.
Peter Vandermeersch, a senior journalist at Mediahuis - the publisher behind De Telegraaf and the Irish Independent - was suspended after admitting he used AI that "wrongly put words into people's mouths." His own description: he "fell into the trap of hallucinations." This is the media ethics failure that's been predicted since generative AI hit newsrooms, and now it has a name and a face.
The Fabricated-Quote Problem in Journalism
What makes this particularly striking is that Vandermeersch wasn't a junior staffer - he was the former editor-in-chief of the same publication that investigated him. If someone at that level of experience can "fall into the trap," that tells you something about how easy it is to miss hallucinated content when it's plausibly formatted.
This story pairs with the Hachette news from yesterday: the publisher pulled a horror novel called Shy Girl after widespread speculation online that author Mia Ballard relied heavily on AI. The book had already been published in the UK in November 2025 - Hachette cancelled the US launch after the allegations gained traction. Two major publishing-industry AI controversies landing in the same week is not a coincidence - it reflects an industry still figuring out where the lines are.
🛠️ NVIDIA's Nemotron-Cascade 2 Is the Efficiency Story of the Week
On the open-source front, NVIDIA shipped something worth paying attention to. Nemotron-Cascade 2 is a 30-billion-parameter Mixture-of-Experts (MoE) model - but here's the key number: it activates only 3 billion parameters per inference pass. That's the MoE efficiency trick in action - you get a big model's knowledge base without the compute cost of running all of it every time.
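If you want the intuition in code, here's a deliberately tiny sketch of the routing trick - the generic top-k MoE pattern, not Nemotron-Cascade 2's actual architecture, which NVIDIA hasn't detailed here. All the sizes are toy numbers: 10 experts with top-1 routing, so roughly a tenth of the expert weights touch any given token, mirroring the 3B-of-30B ratio.

```python
# Toy Mixture-of-Experts layer - illustrative only, not NVIDIA's design.
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=10, top_k=1):
        super().__init__()
        # Total parameters grow with n_experts, but each token only
        # ever runs through top_k of them.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        ])
        self.router = nn.Linear(d_model, n_experts)  # scores every expert per token
        self.top_k = top_k

    def forward(self, x):  # x: (n_tokens, d_model)
        # Pick the top_k highest-scoring experts for each token.
        weights, picked = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                routed = picked[:, k] == e  # which tokens chose expert e
                if routed.any():
                    # Only these tokens pay for expert e's compute.
                    w = weights[routed][:, k].unsqueeze(-1)
                    out[routed] += w * expert(x[routed])
        return out

layer = ToyMoELayer()
tokens = torch.randn(8, 64)  # a batch of 8 token embeddings
print(layer(tokens).shape)   # torch.Size([8, 64]) - each token used 1 of 10 experts
```

Real MoE stacks add things this sketch skips - renormalizing the selected weights, load-balancing losses so no expert sits idle - but the core trade is the same: total parameters set what the model can know, and active parameters set what each token costs.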
Gold Medal Benchmarks at a Fraction of the Compute
NVIDIA says Nemotron-Cascade 2 is the second open-weight model to hit Gold Medal-level performance on the 2025 competition benchmarks. The focus is specifically on reasoning and agentic tasks - which makes sense given where the industry is heading. This isn't a model designed to write better marketing copy; it's aimed at multi-step problem solving and autonomous agent workflows.
For developers building with open-source AI, this is a meaningful release. The combination of open weights, low active parameter count, and strong reasoning benchmarks makes it a legitimate candidate for teams that want capable local inference without a rack of GPUs. Worth pulling up the model card if you're evaluating options right now - and our token calculator can help you think through the cost side if you're comparing API vs. local deployments.
🏢 Atlassian's AI Teammates Couldn't Save Their Human Coworkers' Jobs
The Guardian talked to former Atlassian employees who were laid off - people who actually used the company's AI agents daily. One former Sydney staffer summed it up perfectly: "These AI agents have been really, really helpful. But you couldn't use something like that to replace an actual human worker." Then they got replaced anyway.
The Productivity Paradox Playing Out in Real Time
This is the future-of-work tension in one story: AI tools made employees more productive, the company captured that value, and then reduced headcount. The workers who got good at using the tools weren't protected by that skill - they were made redundant by the same efficiency argument. "We're more efficient now" cut in one direction.
Former staff describe being let go without clear explanation after consistently strong performance reviews. Several say they're still looking for closure months later. It's a human story behind a corporate efficiency narrative - and one that's going to repeat across the industry.
Speaking of building things faster - if you're experimenting with AI-assisted development, 60sec.site lets you spin up a professional website in under a minute using AI. Worth a look if you're prototyping or launching something lean.
📍 Quick Catch-Up: Anthropic's Court Battle Gets More Complicated
One more worth flagging briefly. We covered the Anthropic vs. Pentagon situation earlier this week, but Friday brought a significant new development: Anthropic submitted sworn court declarations revealing that the Pentagon told the company the two sides were "nearly aligned" just one week before the Trump administration declared the relationship a national security risk.
Anthropic also directly denied the DoD's claim that it could "manipulate" AI models mid-conflict, calling the allegation a technical misunderstanding. The gap between "nearly aligned" and "unacceptable risk" in seven days is a remarkable shift - and the sworn declarations suggest Anthropic is treating this fight seriously. Dig into our military AI coverage for the full timeline.
🌎 Trivia Reveal
The answer is 3 billion! NVIDIA's Nemotron-Cascade 2 has 30 billion total parameters but activates only 3 billion at any given time - that's the Mixture-of-Experts architecture doing its thing. The model routes each input through a small subset of its total capacity, which is why it can deliver strong reasoning performance without burning through compute like a dense 30B model would.
💬 Quick Question
The Atlassian story got me thinking: are you seeing AI tools actually change headcount decisions where you work, or does it still feel like productivity theater? Hit reply and tell me what you're actually observing - not the official company line, but the real situation on the ground. I read every response.
That's it for today. Find more daily AI coverage at dailyinference.com - and see you tomorrow.