☀️ TRENDING AI NEWS

  • 🤖 Nvidia CEO Jensen Huang tells Lex Fridman: 'I think we've achieved AGI'

  • ⚠️ AI-generated child sexual abuse material online surged 14% in 2025, with 65% of videos in the most extreme category

  • 🚀 Meta's Darwin Gödel Machine builds AI agents that rewrite their own learning algorithms

  • 🏢 Air Street Capital closes $232M Fund III to back early-stage AI in Europe and North America

Five words from the CEO of the world's most important AI chip company: 'I think we've achieved AGI.' That's Jensen Huang on the Lex Fridman podcast this week - and if you think that's just CEO bluster, the rest of today's issue tells a more complicated, and honestly more interesting, story about where we actually are.

🤓 AI Trivia

The term 'AGI' - artificial general intelligence - has been debated for decades. But which famous computer scientist first proposed a test for machine intelligence that many consider a precursor to the AGI concept?

  • 🧠 Marvin Minsky

  • 🧠 Alan Turing

  • 🧠 John McCarthy

  • 🧠 Claude Shannon

The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇

🤖 Jensen Huang Says We've Reached AGI - Here's the Catch

On Monday's episode of the Lex Fridman podcast, Nvidia CEO Jensen Huang made one of the most headline-grabbing statements in recent AI memory: "I think we've achieved AGI."

What Huang Actually Means (And Why It's Contested)

AGI - artificial general intelligence - has no universally agreed definition, and that's exactly the point. As The Verge notes, tech CEOs have increasingly moved the goalposts to match what current systems can already do. Huang's claim is less a scientific declaration and more a reflection of how blurry the line has become between 'general' and 'narrow' AI.

That ambiguity is doing a lot of work here. The comment also lands just days after OpenAI and others made similarly soft claims. Whether or not you believe it, the fact that Nvidia's CEO - the person selling the hardware that runs all of this - is saying it publicly is a signal worth paying attention to.

⚠️ AI-Generated CSAM Online Jumped 14% Last Year

This one is grim but important. The Internet Watch Foundation - a UK-based safety watchdog - identified 8,029 AI-generated images and videos of child sexual abuse material online in 2025, a 14% rise from the year before. Of the video content found, 65% fell into the most extreme category of abuse depictions.

Realistic and Getting Harder to Detect

The IWF specifically flagged that this material wasn't obviously fake or cartoonish - it was described as 'realistic' AI-generated content. That distinction matters because it makes detection materially harder and increases the potential for misuse in grooming scenarios. You can follow our child safety coverage for ongoing reporting in this space.

The findings arrive as generative image and video tools become more accessible and more capable - a reminder that the same technology powering creative tools is being weaponized in the worst possible ways. Regulation in this space has not kept pace.

🚀 Meta's Agents Now Rewrite Their Own Learning Rules

The dream of recursive self-improvement in AI - where a system doesn't just get better at tasks but gets better at how it learns - has mostly lived in theoretical papers. Meta AI's new Darwin Gödel Machine (DGM) is a real-world attempt to change that.

From Theory to Self-Modifying Reality

The Gödel Machine concept has existed for decades - an AI that can rewrite any part of itself, including its own learning algorithm, if it can prove the rewrite leads to better outcomes. DGM attempts to make this practical: these 'hyperagents' can modify the rules they use to learn and adapt, not just the weights they've accumulated.
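To make the core idea concrete, here's a deliberately tiny sketch of that accept-only-if-better self-modification loop. This is an illustrative toy, not Meta's DGM: the 'learning rule' is just a gradient step size, the 'proof' of improvement is an empirical score, and all names are invented for the example.

```python
import random

def make_update_rule(lr):
    """A 'learning rule': plain gradient descent with step size lr."""
    def update(x, grad):
        return x - lr * grad
    return update

def evaluate(update, steps=50):
    """Score a rule by how well it minimises f(x) = x^2 starting from x = 10."""
    x = 10.0
    for _ in range(steps):
        x = update(x, 2 * x)  # gradient of x^2 is 2x
    return -abs(x)  # higher is better (closer to the minimum at 0)

# Self-modification loop: the agent proposes rewrites of its own learning
# rule and keeps a rewrite only if it demonstrably improves performance.
rng = random.Random(0)
lr = 0.01
rule = make_update_rule(lr)
score = evaluate(rule)

for _ in range(100):
    candidate_lr = lr * rng.uniform(0.5, 2.0)   # proposed rewrite of the rule
    candidate = make_update_rule(candidate_lr)
    candidate_score = evaluate(candidate)
    if candidate_score > score:                 # accept only proven improvement
        lr, rule, score = candidate_lr, candidate, candidate_score
```

The key property - and the part DGM scales up with far richer rewrites - is that the thing being searched over is the learning procedure itself, not the parameters it produces.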

It's early-stage research, so the cautious read is: interesting proof of concept, not a runaway self-improving superintelligence. But paired with Jensen Huang's AGI comment above, it illustrates how quickly the frontier is moving in the direction of systems that can improve themselves. If you follow AI research closely, DGM is worth keeping an eye on.

🏢 Europe's Biggest Solo AI Bet: Air Street Closes $232M Fund

London-based Air Street Capital has closed a $232 million Fund III, making it one of the largest solo venture capital funds in Europe. The firm focuses exclusively on early-stage AI investments across European and North American companies.

Solo VC Bets Big in a Crowded Market

Solo VC funds - run by a single general partner rather than a team - are unusual at this scale. Air Street's thesis is focused on technical AI companies at the earliest stages, before the big multi-stage funds pile in. At $232M, they have real firepower to lead rounds and take meaningful ownership in the companies they back.

The timing tracks with a broader wave of AI-focused capital formation in Europe. If you're building an AI startup and haven't explored what's happening on the funding side, it might be a good moment - tools like 60sec.site can help you spin up a professional landing page fast while you focus on product. Pitching investors without a polished web presence is a harder sell than it needs to be.

🛠️ Gimlet Labs Is Cracking the Multi-Chip Inference Problem

Here's a problem most people don't think about: running AI inference across different hardware from Nvidia, AMD, Intel, ARM, Cerebras, and others simultaneously is genuinely hard. Each chip has its own stack, quirks, and performance profile. Gimlet Labs just raised an $80 million Series A to solve exactly this.

One Abstraction Layer to Rule Them All

Gimlet's approach lets AI workloads run across multiple chip architectures without developers having to write or maintain separate optimised code for each. For companies running large-scale AI infrastructure, this is the kind of abstraction layer that could meaningfully cut costs and reduce vendor lock-in - especially as the chip market diversifies beyond Nvidia's dominance.
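The shape of such an abstraction layer is easy to sketch: one workload description, many per-chip backends, and a dispatcher so callers never write per-chip code. This is a hypothetical illustration of the pattern - the names (`InferenceJob`, `run_anywhere`, the backend registry) are ours, not Gimlet's API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class InferenceJob:
    model: str
    batch_size: int

# Registry mapping chip families to backend implementations.
BACKENDS: Dict[str, Callable[[InferenceJob], str]] = {}

def register(name: str):
    def deco(fn):
        BACKENDS[name] = fn
        return fn
    return deco

@register("nvidia")
def run_cuda(job: InferenceJob) -> str:
    # In reality: compile and launch optimised CUDA kernels.
    return f"{job.model} on CUDA, batch={job.batch_size}"

@register("amd")
def run_rocm(job: InferenceJob) -> str:
    # In reality: target the ROCm stack instead.
    return f"{job.model} on ROCm, batch={job.batch_size}"

def run_anywhere(job: InferenceJob, available: List[str]) -> str:
    """Dispatch to the first available chip; callers stay hardware-agnostic."""
    for chip in available:
        if chip in BACKENDS:
            return BACKENDS[chip](job)
    raise RuntimeError(f"no backend for any of {available}")
```

The hard engineering, of course, is hidden inside each backend - generating code that's actually competitive with hand-tuned kernels on every chip family - which is what the $80M is for.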

TechCrunch describes it as 'surprisingly elegant,' which in startup coverage is usually code for 'it actually works.' The $80M raise gives them room to prove it at scale.

⚠️ BlackRock's Fink: AI Boom Could Deepen Inequality

Larry Fink - CEO of BlackRock, the world's largest asset manager with $14 trillion under management - used his annual investor letter to sound a warning: the AI boom risks concentrating its financial rewards in the hands of a very small number of companies and investors.

Fink isn't an outsider critic - he's one of the people most likely to benefit from that concentration, which is part of what makes this notable. When those positioned to win from AI's economic upside start flagging its inequality risks, it tends to land differently than when critics do.

The argument isn't that AI will destroy jobs wholesale - it's that the returns will accrue to capital and to a handful of platforms, while productivity gains don't necessarily translate into broader wage growth. It's the kind of slow-moving structural concern that doesn't make headlines as easily as a model launch, but probably matters more long-term.

🌎 Trivia Reveal

The answer is Alan Turing! In his landmark 1950 paper 'Computing Machinery and Intelligence,' Turing proposed the famous 'imitation game' - now known as the Turing Test - as a way to evaluate machine intelligence. The paper opened with the question 'Can machines think?' and the concept of machines demonstrating general-purpose intelligence has been central to AI discourse ever since. John McCarthy actually coined the term 'artificial intelligence' in 1956, but it was Turing who laid the conceptual groundwork.

💬 Quick Question

Jensen Huang says AGI is here. Do you buy it - or do you think the goalposts just moved again? Hit reply and tell me where you land on this. I read every response and I'm genuinely curious whether readers are convinced or skeptical.

That's all for today - see you tomorrow with more. For deeper coverage on any of these topics, head to dailyinference.com where we keep the archive running and the tag pages updated in real time.

Keep Reading