🤖 Daily Inference

Good morning! Today's AI landscape is anything but quiet. Google is scrambling to remove dangerous health advice from its AI summaries, Meta just signed deals for enough nuclear power to run a small country, and governments worldwide are taking action against X's AI image generator. Let's dive into what's happening.

⚠️ Google Removes AI Health Summaries After Safety Concerns

Google has been forced to remove some of its AI-generated health summaries after a Guardian investigation revealed they were putting users at risk. The company's AI Overviews feature, which provides instant answers at the top of search results, had been generating health advice dangerous enough to directly harm people seeking medical information.

The issue highlights a critical problem with deploying AI systems at scale without adequate safety measures. For health-related searches, Google's AI has been providing summaries that contradict established medical guidance or offer potentially harmful recommendations. This isn't just a theoretical concern: people turn to Google for health information in moments of uncertainty or crisis, which makes accuracy essential.

The removal of these summaries represents a significant step back for Google's AI ambitions. The company has been aggressively rolling out AI Overviews to compete with ChatGPT and other conversational AI tools, but this incident underscores the tension between moving fast and ensuring safety. For users, it's a reminder that AI-generated health information should always be verified with healthcare professionals, regardless of how authoritative it appears.

🏢 Meta Signs Nuclear Power Deals for AI Infrastructure

While Google grapples with AI safety, Meta is making massive bets on AI infrastructure. The company just announced deals with three nuclear energy providers to secure more than 6 gigawatts of capacity, enough electricity for millions of homes. One of these partnerships includes Bill Gates' nuclear startup, TerraPower, signaling a major shift in how tech companies plan to fuel their AI ambitions.
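To put 6 gigawatts in perspective, here's a quick back-of-envelope check, a minimal sketch that assumes an average US household draws about 1.2 kW continuously (roughly 10,500 kWh per year; real figures vary widely by region):

```python
# Back-of-envelope: how many homes could 6 GW serve?
# Assumption: an average US household draws ~1.2 kW continuously
# (about 10,500 kWh/year); actual consumption varies by region.

SECURED_CAPACITY_W = 6e9         # 6 gigawatts, per the reported deals
AVG_HOUSEHOLD_DRAW_W = 1.2e3     # ~1.2 kW average load (assumed)

homes_served = SECURED_CAPACITY_W / AVG_HOUSEHOLD_DRAW_W
print(f"~{homes_served / 1e6:.0f} million homes")  # -> ~5 million homes
```

That lines up with the "millions of homes" framing, though data centers draw that power around the clock, which is exactly why Meta wants nuclear's steady baseload rather than intermittent sources.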

The scale of these deals reflects the staggering energy demands of modern AI systems. Training large language models and running inference at scale requires enormous computational resources, which translates directly into electricity consumption. By investing in nuclear power, Meta is betting on a carbon-free energy source that can provide the consistent, high-volume power that data centers demand. This isn't just about meeting current needs—it's about positioning for a future where AI workloads continue to grow exponentially.

The move puts Meta ahead of competitors in securing dedicated power infrastructure, but it also raises questions about the environmental and social costs of AI development. Nuclear power avoids carbon emissions, but it brings its own challenges around waste disposal and safety. For the AI industry, this represents a broader trend: as models become more powerful and deployment scales up, infrastructure is becoming a competitive battleground just as important as the algorithms themselves.

⚠️ Global Backlash: Indonesia Blocks Grok Over Deepfakes

Speaking of AI controversies, X's Grok AI image generator is facing unprecedented regulatory action. Indonesia has completely blocked access to Grok after the tool was used to create non-consensual, sexualized deepfakes. The ban represents one of the first major government actions against a generative AI tool and could signal the beginning of a broader regulatory crackdown.

The problem stems from Grok's relatively permissive content policies compared to other AI image generators. While tools like DALL-E and Midjourney have implemented strict safeguards against creating explicit or harmful content, Grok has taken a more hands-off approach under the banner of free expression. This has made it the go-to tool for creating 'undressing' images and other forms of non-consensual intimate imagery, with particularly disturbing reports of women in hijabs and saris being targeted.
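To make "strict safeguards" concrete, here is a toy sketch of a pre-generation safety gate; the keyword list stands in for the trained safety classifiers real systems use, and every name and category here is an illustrative assumption, not any vendor's actual code:

```python
# Toy pre-generation safety gate for an image generator. The keyword
# check stands in for a trained safety classifier; the categories and
# terms below are illustrative assumptions, not any vendor's policy.

BLOCKED_TERMS = {
    "sexual_content": ["undress", "nude", "explicit"],
    "nonconsensual_imagery": ["without consent", "deepfake of"],
}

def classify_prompt(prompt: str) -> set[str]:
    """Return the policy categories a prompt appears to violate."""
    lowered = prompt.lower()
    return {category for category, terms in BLOCKED_TERMS.items()
            if any(term in lowered for term in terms)}

def request_image(prompt: str) -> dict:
    """Refuse flagged prompts before any generation happens."""
    flagged = classify_prompt(prompt)
    if flagged:
        return {"status": "refused", "categories": sorted(flagged)}
    return {"status": "ok", "prompt": prompt}  # a real system would generate here

print(request_image("a watercolor of a lighthouse at dawn"))
print(request_image("undress the person in this photo"))
```

Production systems typically layer checks like this on both the prompt and the generated image, which is the layer Grok has reportedly been far more permissive about.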

Indonesia's ban isn't happening in isolation. The UK government has threatened X with a potential ban and ordered the platform to address the issue, while Democrats in the US have asked Apple and Google to remove X from their app stores. X's response—initially restricting image generation to paying subscribers only—has been criticized as inadequate and even cynical, essentially putting harmful capabilities behind a paywall rather than removing them entirely. For AI developers and policymakers, this crisis illustrates the real-world harms that can emerge when powerful generative tools are deployed without adequate safeguards.

🛠️ OpenAI Asks Contractors to Upload Real Work Documents

In a move raising both eyebrows and privacy concerns, OpenAI is reportedly asking contractors to upload real work documents from their past jobs to help evaluate the performance of AI agents. The request is part of OpenAI's effort to test how well its AI systems handle real-world professional tasks, but it's sparking debate about data privacy and the boundaries of AI training.

According to reports from both Wired and TechCrunch, OpenAI wants contractors to provide authentic workplace materials (spreadsheets, presentations, code repositories, and internal documents) so its AI agents can be tested on genuine professional scenarios rather than synthetic examples. The company argues this approach provides more realistic benchmarks for evaluating AI capabilities. However, it raises immediate questions about confidentiality, intellectual property, and whether contractors have the right to share materials from previous employers.
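For a sense of what that kind of evaluation might look like mechanically, here's a minimal sketch of a document-grounded benchmark harness; the task format, the toy `run_agent` stub, and the exact-match scoring are all assumptions for illustration, not OpenAI's actual pipeline:

```python
# Minimal sketch of a document-grounded agent evaluation harness.
# The task format, run_agent() stub, and exact-match scoring are
# illustrative assumptions, not any company's actual pipeline.

from dataclasses import dataclass

@dataclass
class Task:
    document: str      # e.g., an exported spreadsheet row or memo text
    instruction: str   # the professional task to perform
    reference: str     # a known-good answer used for scoring

def run_agent(document: str, instruction: str) -> str:
    """Placeholder: a real harness would call a model with tool access.
    Toy behavior: answer with the last comma-separated field."""
    return document.split(",")[-1].strip()

def evaluate(tasks: list[Task]) -> float:
    """Fraction of tasks where the agent's answer matches the reference."""
    correct = sum(
        run_agent(t.document, t.instruction) == t.reference for t in tasks
    )
    return correct / len(tasks)

tasks = [
    Task("Q3 revenue, region EMEA, 4.2M", "Report the Q3 EMEA revenue.", "4.2M"),
    Task("Invoice 1042, net-30, overdue", "What are the payment terms?", "net-30"),
]
print(f"accuracy: {evaluate(tasks):.0%}")  # the toy agent gets 1 of 2 right
```

The point of using real documents lives in that `document` field: synthetic examples tend to be cleaner and more regular than the messy spreadsheets and memos an agent would face on the job.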

The controversy highlights a broader challenge in AI development: how do you train and evaluate systems on realistic data without compromising privacy or ethics? While OpenAI likely has contractors sign agreements about data usage, many former employers' confidentiality agreements would explicitly prohibit sharing internal documents with third parties. As AI companies push toward more capable 'agent' systems that can perform complex professional tasks, the need for realistic training and evaluation data will only intensify, potentially creating more conflicts between AI advancement and traditional workplace privacy norms.

🚀 CES 2026: The Rise of 'Physical AI' and Robotics

While regulatory battles and infrastructure deals dominate headlines, CES 2026 just wrapped up in Las Vegas with a clear message: AI is going physical. The tech industry's biggest showcase was dominated by robots, autonomous systems, and what's being called 'physical AI'—artificial intelligence that interacts with and manipulates the real world rather than just processing data.

From Nvidia's latest AI chip announcements to AMD's new processors optimized for AI workloads, hardware companies are betting big on embodied intelligence. The term 'physical AI' has become this year's buzzword, encompassing everything from warehouse robots and autonomous vehicles to AI-powered manufacturing systems. Unlike chatbots and image generators, these systems need to understand three-dimensional space, predict physical interactions, and operate safely in unpredictable environments—challenges that require fundamentally different approaches to AI development.
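To make "operate safely" concrete, here is a toy sketch of the kind of hard safety envelope a physical-AI stack might wrap around a learned controller; the limits, the control step, and the `policy` stub are all illustrative assumptions:

```python
# Toy safety envelope around a learned robot policy: whatever velocity
# the model commands, the envelope clamps it to hard physical limits.
# All limits and the policy stub are illustrative assumptions.

MAX_SPEED_M_S = 1.0        # hard velocity cap (assumed)
WORKSPACE_M = (-2.0, 2.0)  # allowed x-range for the effector (assumed)
STEP_S = 0.1               # control loop period (assumed)

def policy(position_m: float) -> float:
    """Placeholder learned policy: drives toward x = 3.0 m, a target
    that (deliberately) lies outside the allowed workspace."""
    return 5.0 * (3.0 - position_m)

def safe_command(position_m: float) -> float:
    velocity = policy(position_m)
    # Clamp speed no matter what the model asked for.
    velocity = max(-MAX_SPEED_M_S, min(MAX_SPEED_M_S, velocity))
    # Refuse any step that would leave the workspace.
    lo, hi = WORKSPACE_M
    if not lo <= position_m + velocity * STEP_S <= hi:
        return 0.0
    return velocity

print(safe_command(0.0))   # policy asks for 15 m/s; envelope allows 1.0
print(safe_command(1.95))  # next step would cross the boundary; halt: 0.0
```

The key design point is that the envelope is deterministic and sits outside the learned model, so safety doesn't depend on the model behaving.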

The emphasis on physical AI at CES signals a maturation of the AI industry. After years focused primarily on language and image generation, major players are now targeting real-world automation and robotics as the next frontier. This shift has massive implications for manufacturing, logistics, healthcare, and transportation. It also means AI systems will increasingly need to prove their safety and reliability in physical contexts where mistakes can cause real harm—a challenge that makes today's other stories about AI safety and regulation even more relevant.

If you're thinking about building your own AI-powered project or just want to quickly launch a website to showcase your work, check out 60sec.site—an AI website builder that can get you online in under a minute. And for daily AI news delivered to your inbox, visit dailyinference.com.

🎮 Baldur's Gate 3 Studio Takes Stand Against AI

Not all AI news is about adoption and expansion. Larian Studios, the developer behind the critically acclaimed Baldur's Gate 3, announced yesterday that it won't use AI for concept art or writing. The statement, made during a Reddit AMA, positions the studio firmly on one side of gaming's heated AI debate.

Larian's decision comes as many game studios experiment with AI tools for everything from generating background art to writing dialogue variations. The studio's leadership emphasized their commitment to human creativity and craftsmanship, arguing that the artistic vision and emotional depth that made Baldur's Gate 3 successful can't be replicated by AI systems. This stance resonates with many in the creative community who view generative AI as a threat to artistic jobs and creative authenticity.

The announcement is significant because Larian isn't a small indie studio—Baldur's Gate 3 was one of 2023's biggest commercial and critical successes, proving that games developed entirely with human talent can still dominate the market. Their position suggests that AI adoption in creative industries isn't inevitable, and that there may be competitive advantages to marketing products as 'AI-free.' As more companies across industries face pressure to incorporate AI, Larian's choice demonstrates that opting out remains a viable—and potentially differentiating—strategy.

💬 What Do You Think?

Today's stories paint a complicated picture of AI development—simultaneous progress and problems, adoption and resistance. I'm curious: Do you think the benefits of rapidly deploying AI tools outweigh the risks we're seeing with issues like Google's health summaries and Grok's deepfakes? Or should companies slow down until stronger safeguards are in place? Hit reply and let me know your thoughts—I read every response!

Thanks for reading today's edition. If you found this valuable, forward it to a colleague who's trying to keep up with AI developments. See you tomorrow with more from the frontlines of artificial intelligence.