☀️ TRENDING AI NEWS

  • 🚨 Baidu's Apollo Go robotaxis froze mid-trip in Wuhan, stranding passengers and causing traffic chaos

  • 🏢 Oracle cuts thousands of jobs while ramping up AI infrastructure spending

  • 🛠️ Cognichip raises $60M to use AI for designing the chips that power AI

  • ⚠️ Two-thirds of UK teachers report that students are losing critical thinking skills due to AI use

Something quietly broke in Wuhan yesterday - and it wasn't just one car.

Dozens of Baidu's autonomous taxis simultaneously froze in traffic, passengers locked inside, highways snarled, and at least one accident reported. It's the kind of failure scenario that keeps autonomous vehicle researchers up at night - and it happened at scale, in a real city, to real people.

But that's not the only story worth your attention today. There's a new survey out of England that asks an uncomfortable question about what AI is doing to the next generation of thinkers. And there's a startup betting that the most impactful place to apply AI isn't at the application layer - it's in the silicon itself.

Let's get into it.

🤓 AI Trivia

Baidu's Apollo Go is one of the world's largest robotaxi operations. In which Chinese city did it first launch commercial driverless rides with no safety driver onboard?

  • 🏙️ Shanghai

  • 🏙️ Beijing

  • 🏙️ Wuhan

  • 🏙️ Shenzhen

The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇

🚗 Baidu's Robotaxis Froze - And Passengers Were Trapped Inside

On Tuesday, multiple Baidu Apollo Go robotaxis simultaneously stopped working in Wuhan, China. Police confirmed receiving numerous reports of vehicles halting in the middle of roads. Passengers were reportedly trapped inside, others were left stranded on highways, and at least one accident occurred in the resulting traffic snarl.

Scale Makes This More Than a Software Glitch

This wasn't an isolated incident with a single vehicle - it was a fleet-wide failure affecting numerous cars at once. That's the kind of systemic issue that raises serious questions about the redundancies and fail-safes built into large-scale autonomous deployments. Apollo Go has been one of the more aggressive commercial rollouts of driverless taxis globally, operating in multiple Chinese cities.

The timing is uncomfortable for the autonomous vehicle industry broadly. Advocates have spent years arguing that robotaxis are statistically safer than human drivers - but mass freezes that trap passengers aren't part of that safety narrative. Baidu has not yet provided a detailed explanation for what caused the simultaneous failures.

⚠️ Two-Thirds of UK Teachers Say AI Is Eroding Students' Thinking Skills

A new survey of secondary school teachers in England has surfaced a finding that's hard to dismiss: two-thirds of teachers reported observing a decline in critical thinking and core abilities among students who regularly use AI tools. Writing skills, problem-solving, and even basic spelling are all cited as areas of decline - with teachers noting that students no longer feel the need to spell because voice-to-text handles it for them.

Dependence vs. Delegation - Where's the Line?

This is the tension at the heart of AI in education: there's a meaningful difference between using AI as a tool to extend your capabilities and using it as a crutch that replaces the cognitive work entirely. Teachers seem to be observing the latter happening in real classrooms, right now.

The concern isn't new - educators have raised versions of this argument since calculators arrived in classrooms. But the scope of what AI can offload is vastly larger than any previous tool. When a student can bypass not just arithmetic but reasoning, writing, and research simultaneously, the question of what skills actually get built becomes genuinely urgent. If you've been following our coverage on AI's impact on learning and creativity, this survey adds a sobering data point.

🔬 Cognichip Raises $60M to Let AI Design Its Own Hardware

Here's a genuinely recursive idea: use AI to design the chips that make AI faster. That's the pitch from Cognichip, a startup that just closed a $60 million funding round to build AI-driven chip development tools. The company claims it can reduce chip development costs by more than 75% and cut timelines by more than half.

Why Chip Design Is the Right Bottleneck to Attack

Designing a modern chip is extraordinarily expensive and slow - a single new design can cost hundreds of millions of dollars and take years. If AI can compress that process meaningfully, the downstream effects on the entire AI infrastructure stack could be significant. Faster design cycles mean more hardware iterations, which means faster capability improvements.

The semiconductor industry has been experimenting with ML-assisted chip design for years - Google's work on floorplanning with reinforcement learning is a notable example. Cognichip is betting there's a much larger opportunity to automate the full development pipeline. With AI hardware demand showing no signs of slowing, the timing for this pitch is sharp.

🏢 Oracle Cuts Jobs to Fund Its AI Infrastructure Bet

Oracle has begun laying off thousands of employees - out of a workforce of roughly 162,000 - as it redirects capital toward AI infrastructure spending. The $420 billion company, chaired by Trump ally Larry Ellison, started the cuts this week and framed them as part of a broader strategic pivot to reassure investors that its AI infrastructure investments will pay off.

The Classic AI-Era Tradeoff

Oracle isn't alone in this pattern. Across big tech, companies are trimming human headcount while simultaneously ramping up capital expenditure on data centers and AI compute. It's a recurring theme in the current cycle: the economic impact of AI investment is showing up as job displacement before it shows up as productivity gains.

For Oracle specifically, the pressure is to prove that its cloud and AI infrastructure play can compete with AWS, Azure, and Google Cloud. The layoffs signal urgency - but whether the infrastructure bet delivers is still an open question. If you want to build fast on that kind of infrastructure without the overhead, tools like 60sec.site let you spin up an AI-powered website in seconds - no waiting for enterprise procurement cycles.

🛠️ Elgato's Stream Deck Now Takes Orders From AI Agents

In a genuinely fun bit of product news: Elgato has released Stream Deck software version 7.4, which adds Model Context Protocol (MCP) support. That means AI assistants - including Claude, ChatGPT, and Nvidia G-Assist - can now find and trigger Stream Deck actions on your behalf. You set up the actions, the AI activates them.

MCP Is Becoming the Universal AI Connector

This is a small but telling example of how MCP is spreading through the developer tools ecosystem. Originally championed by Anthropic as a standard for connecting AI models to external systems, MCP is now showing up in hardware peripherals - which says a lot about where the industry thinks AI agents are heading. If everything becomes an MCP server, agents can control the full stack of your digital environment.
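For the curious: under the hood, MCP is built on JSON-RPC. Here's a rough sketch of what the exchange might look like on the wire when an agent discovers and triggers an action - the method names (`tools/list`, `tools/call`) come from the MCP spec, but the tool name and arguments are hypothetical, invented for illustration, not Elgato's actual API.

```python
import json

# 1. The agent asks the MCP server what actions are available.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# 2. The agent triggers one of the advertised actions.
#    "stream_deck.press_button" is a made-up tool name for illustration.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "stream_deck.press_button",
        "arguments": {"button": 3},
    },
}

# Messages are serialized as JSON and sent over stdio or HTTP.
wire = json.dumps(call_request)
decoded = json.loads(wire)
print(decoded["params"]["name"])
```

The point of the standard is exactly this plainness: any device or app that speaks these few JSON-RPC methods becomes controllable by any MCP-aware agent, no bespoke integration required.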

For streamers and content creators, this is mostly a convenience feature today. But the broader implication - AI agents that can actuate physical and software controls on demand - is worth watching closely as AI agents become more capable.

📊 AI Benchmarks Are Broken - MIT Says We Need to Rethink Everything

MIT Technology Review published a sharp piece this week arguing that the entire framework we use to evaluate AI models is fundamentally flawed. The core problem: benchmarks are built around AI vs. human comparisons on isolated tasks - chess, coding, math - when real-world AI deployment looks nothing like that. Models are now embedded in systems, working alongside humans, across long workflows with ambiguous goals.

What Good Evaluation Actually Looks Like

The piece argues we need benchmarks that measure AI performance in context - how a model performs as part of a team, over time, on tasks that don't have a clean right answer. That's a much harder evaluation problem, but it's the one that actually reflects how AI systems get deployed. The current benchmark arms race - where labs optimize heavily for MMLU or HumanEval scores - may be producing models that look impressive on paper but underperform in production.

This matters because benchmark scores are how the industry communicates progress - to investors, to enterprise buyers, to regulators. If those scores are measuring the wrong thing, a lot of consequential decisions are being made on bad information. Check out our AI benchmarks coverage for more context on how this conversation has been evolving.

🌎 Trivia Reveal

The answer is Wuhan! Baidu's Apollo Go launched its first fully driverless commercial robotaxi service - no safety driver onboard - in Wuhan in 2022. The city has been Apollo's primary testing and commercial deployment ground, which makes yesterday's fleet-wide freeze all the more significant. It happened in the city where Baidu has the most experience and the deepest operational history.

💬 Quick Question

The UK teacher survey today got me thinking: do you feel like AI has made you sharper or softer as a thinker? Like - are you finding yourself reasoning through problems more carefully because AI gives you a fast first draft to critique, or are you offloading more than you'd like to admit? Hit reply and tell me honestly - I read every response and I'm genuinely curious where people land on this one.

That's it for today - see you tomorrow with more. And if you want to stay across everything in between, dailyinference.com has the full archive and daily coverage.

Keep Reading