🤖 Daily Inference
Happy Sunday! We've got a packed edition today - Google just dropped a Gemini model with record-breaking benchmark scores, NVIDIA has released a fascinating open-source robot world model trained on tens of thousands of hours of human behavior, and Bernie Sanders is asking some big questions about whether anyone is actually in control of the AI revolution. Let's get into it.
⚡ Google's Gemini 3.1 Pro Just Set Another Benchmark Record
Google isn't taking its foot off the gas. The company has released Gemini 3.1 Pro, a new model that's posting record benchmark scores - including a striking 77.1% on ARC-AGI-2, a benchmark specifically designed to test fluid intelligence and general reasoning in AI systems. That's a significant leap, and it puts Gemini 3.1 Pro at the top of the leaderboard on this particularly challenging test.
The model also features a 1 million token context window, meaning it can process and reason over enormous amounts of text in a single interaction - think entire codebases, lengthy legal documents, or research archives. This isn't just a spec sheet win: a million-token context has real-world implications for AI agents that need to operate over long, complex tasks without losing track of earlier information.
TechCrunch notes that this is yet another round of record benchmark scores from Google - a pattern that's becoming a calling card for the company's AI division. Whether these benchmark gains translate cleanly to real-world performance is always worth scrutinizing, but the raw numbers suggest Google is firmly in the race for the most capable AI models available today. We've been tracking Google's AI momentum closely - see all our Google Gemini coverage here.
🤖 NVIDIA's DreamDojo Trains Robots on 44,711 Hours of Human Video
NVIDIA has just open-sourced DreamDojo, a robot world model trained on a staggering 44,711 hours of real-world human video data. The idea here is fascinating: instead of training robots purely in simulation or from scratch with robotic sensors, DreamDojo learns from watching humans move through the world - giving it a grounded understanding of how physical environments work.
World models are essentially an AI's internal simulation of how the world works - they allow a robot (or agent) to predict what will happen next based on what it currently sees and does. By grounding DreamDojo in tens of thousands of hours of real human behavior rather than synthetic data alone, NVIDIA is betting that the richness and variety of real-world experience translate into more robust and adaptable robot behavior.
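If you want the intuition in concrete terms, here's a minimal, purely illustrative sketch of the core interface a world model exposes - predict the next observation from the current observation plus a candidate action. This is not DreamDojo's actual architecture; every name and dimension below is a hypothetical stand-in chosen just to make the idea tangible.

```python
import torch
import torch.nn as nn

class ToyWorldModel(nn.Module):
    """Toy world model: (current observation, action) -> predicted next observation."""
    def __init__(self, obs_dim=64, action_dim=8, hidden_dim=256):
        super().__init__()
        # Encode the observation and action together, then decode a prediction
        # of what the world will look like one step later.
        self.net = nn.Sequential(
            nn.Linear(obs_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, obs_dim),
        )

    def forward(self, obs, action):
        return self.net(torch.cat([obs, action], dim=-1))

model = ToyWorldModel()
obs = torch.randn(1, 64)     # stand-in for an encoded camera frame
action = torch.randn(1, 8)   # stand-in for a motor command
predicted_next_obs = model(obs, action)  # the "what happens next" guess
```

A real system like DreamDojo operates on video at vastly larger scale and with far richer architectures, but the promise is the same: if the model can guess the consequences of an action before taking it, the robot can plan instead of just react.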
Releasing this as open source is notable. It lowers the barrier for robotics researchers and startups worldwide to build on top of NVIDIA's infrastructure, potentially accelerating the entire field. Given how much attention robotics is getting from investors and labs right now, DreamDojo could become a foundational building block for the next generation of physical AI systems. We'll be watching how the research community runs with it.
⚠️ Bernie Sanders: America Has No Idea What's Coming With AI
Senator Bernie Sanders is sounding the alarm. In a stark warning covered by The Guardian, Sanders argued that the United States has no real understanding of the speed or scale of the coming AI revolution, and called for the country to "slow this thing down" before it becomes impossible to course-correct. His concern isn't that AI is inherently bad, but that the pace of deployment is outrunning any serious democratic oversight.
Sanders' warning lands at a moment when AI companies are raising funds at eye-watering valuations (more on that in a moment), deploying systems into high-stakes domains like healthcare and criminal justice, and operating largely without federal guardrails. The gap between how fast the technology is moving and how slowly regulation tends to work is a genuine structural problem, and it's one that critics across the political spectrum are increasingly raising.
This isn't just political noise. The question of whether democratic institutions can respond to AI fast enough to shape its development - rather than simply clean up after its harms - is arguably the defining policy question of our era. If you're interested in how AI regulation is evolving, this story is essential reading.
🧠 Mental Health Experts: Google's AI Overviews Are 'Very Dangerous'
Mental health charity Mind has launched a formal inquiry into the impact of AI on mental health, following a Guardian investigation into Google's AI Overviews - the AI-generated summaries that appear at the top of search results. A mental health expert at Mind described the feature as "very dangerous," raising concerns about vulnerable people receiving potentially harmful AI-generated responses when they search for mental health information.
The core issue is that AI Overviews are designed to give quick, confident-sounding answers - but mental health is an area where nuance, professional context, and individual circumstances matter enormously. A system optimized to summarize web content quickly may flatten or distort complex clinical guidance in ways that could cause real harm, especially to people in crisis who are turning to Google for immediate help.
Mind's inquiry signals that civil society is starting to hold AI features accountable in domains where errors carry human cost. This follows a broader pattern we've been tracking: AI moving fast into sensitive areas like mental health technology without adequate safety review. We previously covered Google's medical AI missteps - the pressure is clearly mounting.
🏢 OpenAI Finalizing $100B Deal at $850B+ Valuation - And Nvidia May Invest $30B
OpenAI is reportedly closing in on a $100 billion funding deal at a valuation exceeding $850 billion - a number that would make it one of the most valuable private companies in history. Separately, The Guardian reports that Nvidia is in discussions to invest as much as $30 billion in OpenAI's next funding round, which would be an extraordinary deepening of the relationship between the two companies at the heart of the AI boom.
The strategic logic for Nvidia is clear: OpenAI is the single largest consumer of Nvidia's chips, and cementing that relationship through a major equity stake would lock in both commercial and strategic ties. For OpenAI, Nvidia's investment would be more than just cash - it would signal hardware-level alignment with the company that controls the infrastructure AI runs on.
At $850B+, OpenAI would be valued higher than many Fortune 50 companies, despite still being relatively young and in the middle of a complex transition from nonprofit to for-profit structure. The scale of capital flowing into AI right now is genuinely unprecedented - and it raises real questions about what kind of accountability structures exist at this level of financial power. Follow all our OpenAI coverage to stay up to date.
🌐 Perplexity Retreats From Ads - What It Signals About AI Search
Perplexity AI is pulling back from its advertising strategy, and Wired reports that this signals a broader strategic shift for the AI search startup. When Perplexity first introduced ads, it positioned them as a key part of its monetization model - but the retreat suggests the company is reconsidering how it balances revenue generation with its core promise of clean, direct answers.
This matters because Perplexity's entire value proposition is built on being a faster, more trustworthy alternative to traditional search engines. Ads introduce the same tension that has long plagued Google: when search is ad-supported, results can be influenced by commercial interests. For a product that markets itself on accuracy and directness, that's a particularly uncomfortable contradiction.
The retreat from ads may point toward a subscription-first or enterprise-focused model going forward. It's a reminder that AI business models are still very much being figured out in real time. And if you're building your own AI-powered web presence and thinking about monetization, tools like 60sec.site let you spin up an AI-built website in under a minute - useful for anyone trying to establish a foothold in this fast-moving space without heavy upfront investment.
💬 What Do You Think?
Bernie Sanders' warning this week cuts to something I've been thinking about a lot: do you think democratic governments are capable of meaningfully regulating AI at the speed it's developing? Or is the gap between legislative timelines and AI deployment too wide to bridge? Hit reply and let me know your honest take - I read every response and genuinely love hearing from you.
That's a wrap for Sunday, February 22nd. From record-breaking Gemini scores to open-source robot models to billion-dollar funding rounds, AI's pace this week was characteristically relentless. Share this edition with someone who needs to be keeping up - and visit dailyinference.com for daily coverage of everything happening in AI. See you tomorrow!