🤖 Daily Inference

Happy Friday! The AI world didn't slow down this week. Today we're diving into Nvidia's jaw-dropping earnings, Google Gemini booking your Uber for you, a tense standoff between the Pentagon and Anthropic over Claude's safety limits, and the quietly sobering story of workers who are training AI to do their own jobs. Let's get into it.

⚡ Nvidia Posts Another Record Quarter - And Nobody's Surprised Anymore

Nvidia has reported yet another record-breaking quarter, cementing its status as the undisputed hardware backbone of the AI boom. The results come as tech giants continue pouring extraordinary amounts of capital into AI infrastructure, and Nvidia is the primary beneficiary of that spending spree. The company's data center business has been the engine driving its growth, fueled by insatiable demand for its chips from cloud providers, AI labs, and enterprise customers alike.

What makes this quarter particularly notable is the context around it. There have been persistent fears - amplified by the AI-doomsday analysis that rattled markets earlier this week - that the AI infrastructure buildout might be a bubble about to pop. Nvidia's results suggest those fears haven't slowed spending one bit. Record capital expenditure from major tech firms continues to flow directly into Nvidia's coffers, making it something of a real-time barometer for AI investment confidence. We've been tracking the AI hardware and semiconductor landscape closely in our ongoing coverage.

The broader implication is that whatever uncertainty exists at the application layer of AI, the picks-and-shovels play remains as strong as ever. Companies are pouring capital into infrastructure now, betting that demand for AI compute will only grow.

🛠️ Gemini Can Now Book Your Uber and Order DoorDash - Directly on Your Phone

Google's Gemini has taken a significant step toward becoming a truly useful AI assistant, with new capabilities that allow it to automate multi-step tasks on Android devices. Specifically, Gemini can now book an Uber or order food through DoorDash on your behalf - and it's launching on Samsung's Galaxy S26 and the upcoming Pixel 10. This is exactly the kind of practical, real-world automation that AI assistants have long promised but rarely delivered.

The key here is that Gemini isn't just answering questions or summarizing text - it's actually interacting with third-party apps to complete tasks end-to-end. That's a meaningful technical leap. For years, virtual assistants like Siri and Alexa could set timers and play music, but fell flat when asked to do anything genuinely complex across apps. The timing is also pointed: Apple has struggled publicly with its own AI ambitions for Siri, and Google appears to be deliberately positioning Gemini as the assistant that can do what Apple couldn't. This is a direct shot at Apple Intelligence, and it lands at a moment when Apple is under real pressure.

For users, this marks a real shift in what a phone assistant can do. For developers building apps, it also raises new questions about how AI agents will interact with - and potentially disintermediate - their platforms.

⚠️ The Pentagon Wants Claude to Cross Its Own Red Lines

In one of the more consequential AI stories of the week, US military leaders have been pressuring Anthropic to weaken the safety guardrails built into Claude - its flagship AI model. According to reporting from both The Guardian and The Verge, the Pentagon's team, which includes figures from the private sector brought in under the current administration, has pushed Anthropic to allow Claude to engage with tasks it currently refuses on safety grounds. Anthropic has reportedly resisted these demands.

This standoff cuts to the heart of the central tension in military AI deployment: should AI companies compromise their safety principles to win lucrative government contracts? Anthropic has built its identity around being the "responsible" AI lab - the one that takes safety more seriously than its competitors. Bending to Pentagon pressure would undermine that positioning dramatically, and likely provoke a fierce backlash from safety researchers and ethicists.

The story also raises broader questions about the influence of private equity and Silicon Valley insiders now embedded within the Department of Defense's AI decision-making. The pressure on Anthropic may be a preview of similar battles to come across the industry. We've covered Anthropic extensively - see all our Anthropic coverage here.

🏢 WPP Merges Ad Agencies and Cuts Jobs as AI Reshapes Advertising

WPP, the world's largest advertising group, has announced a radical restructuring - merging multiple ad agencies and cutting jobs in what the company openly frames as a response to the AI threat. It's a stark corporate admission that generative AI is already disrupting the creative and media buying work that has sustained agencies like WPP for decades.

The move reflects a pattern we're seeing across industries: companies aren't waiting to see how AI plays out - they're restructuring now, preemptively, in anticipation of AI handling work that previously required large human teams. For advertising, that means copywriting, campaign ideation, asset creation, and possibly media planning are all on the table. WPP's restructuring signals that agency leadership believes AI won't just augment their workforce - it will significantly shrink it. If you're watching the future of work and job automation closely, this is a significant data point.

For brands and marketers, the question becomes: if your agency is consolidating and cutting the humans who used to work on your account, what does that mean for the quality and creativity of the work? And for the thousands of agency employees who built careers in advertising, the disruption is anything but abstract.

Speaking of building things faster with AI - if you're working on a website, check out 60sec.site, the AI-powered website builder that lets you launch a professional site in under a minute. It's a great example of AI doing the heavy lifting so you can focus on what matters.

🏢 Workers Are Training AI to Do Their Own Jobs - And It's Complicated

One of the most quietly unsettling stories this week comes from The Guardian, which spoke to workers who are actively involved in training AI systems to replicate their own professional skills. The picture that emerges is nuanced and, at times, strange. Some bosses are apparently enthusiastic - almost gleeful - about getting their employees to contribute to systems that could eventually replace them. Workers, meanwhile, describe a mix of resignation, dark humor, and genuine unease.

What makes this story particularly interesting is the "strange mistakes" dimension: workers report that the AI systems they're helping to train make errors that are hard to predict and harder to explain - errors that reveal the gap between what AI can do statistically and what human expertise actually involves. There's also a looming threat that many workers feel but struggle to articulate clearly: they're not sure when the tipping point comes, or whether their contribution to the AI's training will ultimately accelerate or delay their own displacement. For broader context, we've been covering AI and employment trends extensively.

This story resonates because it's not hypothetical. It's happening now, in ordinary workplaces, and it raises profound questions about labor, knowledge, and what it means to teach a machine to do your job.

🚀 Anthropic Acquires Vercept to Bolster Its Computer-Use AI Capabilities

In a move that signals just how seriously Anthropic is taking the agentic AI space, the company has acquired Vercept, a startup focused on computer-use AI. The acquisition comes with an interesting backstory: Meta reportedly poached one of Vercept's founders before the deal closed, adding a layer of competitive drama to the announcement. Anthropic has been building out its computer-use capabilities - the ability for Claude to actually operate software, browse the web, and take actions on a computer - and Vercept's technology and remaining team are expected to accelerate that work.

Computer-use AI is shaping up to be one of the defining battlegrounds of 2026. OpenAI, Anthropic, and Google are all racing to build agents that can do real work inside real software - not just generate text, but actually click buttons, fill forms, navigate apps, and complete multi-step workflows. Anthropic's acquisition of Vercept suggests it's willing to spend to stay competitive in this race, even as it faces pressure on multiple fronts - including, as we covered above, from the Pentagon.

The Vercept deal also underscores how much talent competition is heating up in the AI agent space. When a startup can lose a founder to Meta before its acquisition even closes, you know the war for talent is intense. For more on AI agents and automation, check out our dedicated coverage.

💬 What Do You Think?

Today's newsletter has a through-line that I can't stop thinking about: who controls what AI is allowed to do? Anthropic is resisting the Pentagon. WPP employees are training AI to replace themselves. Google's Gemini is taking actions on your phone autonomously. So here's my question for you: When an AI company sets safety limits on its model, should governments or military clients have the power to override those limits - or should the company's own ethical guidelines take precedence? Hit reply and let me know what you think. I read every single response, and the best ones sometimes make it into next week's newsletter.

That's a wrap for today! From Nvidia's record earnings to Gemini booking your lunch, to the very human story of workers training their AI replacements - it's been a dense and consequential week. If you found this useful, forward it to a friend who follows AI. And as always, visit dailyinference.com for daily AI news and analysis. See you Monday!
