☀️ TRENDING AI NEWS

  • 🏢 Google scraps its crowdsourced health advice AI feature after mounting backlash

  • 🤖 ByteDance quietly pauses global launch of Seedance 2.0 video generator over legal concerns

  • 🚨 Wired exposes Telegram channels recruiting real people to front AI scam operations

  • 🛠️ LangChain releases Deep Agents runtime for complex multi-step AI agent workflows

Something quietly shifted in the AI safety conversation yesterday - and it wasn't just one story. It was three completely separate developments all pointing at the same uncomfortable truth: AI's fastest-moving applications are consistently outrunning the guardrails meant to keep them safe. Let's get into it.

🤓 AI Trivia

What was the name of the AI system that defeated world Go champion Lee Se-dol in 2016, a milestone widely seen as a turning point in AI history?

  • 🎯 AlphaZero

  • 🎯 AlphaGo

  • 🎯 Deep Blue

  • 🎯 AlphaStar

The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇

🏥 Google Quietly Killed Its Amateur Health AI - And Nobody Told the Users

Google has scrapped "What People Suggest" - a feature that displayed crowdsourced health tips from anonymous users alongside its AI search results. At launch, the company framed the feature as showing "the potential of AI to transform health outcomes"; it has now been quietly pulled amid growing scrutiny over letting unvetted advice from strangers inform medical decisions.

From 'AI Health Transformation' to Silent Removal

The feature essentially surfaced tips from random people on the internet - no credentials required - and presented them alongside AI-generated health summaries. The problem is obvious in retrospect: pairing amateur advice with the authority of a Google search result is genuinely dangerous.

This is the second time in recent months that Google has had to retreat on health-adjacent AI features. Remember when we covered Google's AI Overviews debacle? That story showed just how quickly confidence in AI health products can collapse. The pattern is becoming familiar: launch fast, face backlash, quietly retract.

🚨 The AI Scam Economy Has a New Job Listing: Your Face

Wired has reviewed dozens of Telegram channels advertising jobs for "AI face models" - real people, mostly women, being recruited to appear in AI-generated video content. The catch: many of these faces are being used to run romance scams and financial fraud operations at industrial scale. Some listings promise up to 100 video calls per day.

A Gig Economy Built on Deception

The setup is chillingly simple. Scammers recruit real people to record short videos or take photos, which are then fed into AI face-swapping tools to create convincing personas for deepfake-powered scam accounts. Victims on the other end believe they're talking to a real person. The people lending their faces often don't fully understand how their images will be used.

The scale here is staggering. These aren't one-off operations - they're organized, professionalized, and growing. Digital safety researchers have been warning about AI-enabled fraud for years, but the industrialization of it - complete with HR-style job listings and performance targets - is a new and deeply unsettling development.

🎬 ByteDance Hits Pause on Seedance 2.0 - Lawyers Are Involved

ByteDance has reportedly delayed the global launch of Seedance 2.0, its AI video generator, as engineers and lawyers work to avoid further legal issues. No specific details about the legal concerns have been disclosed, but the pause suggests the company is dealing with copyright or rights-related complications before a broader rollout.

The Copyright Trap That Keeps Catching Video AI

AI video generators have become a legal minefield. The industry has watched multiple companies face lawsuits over training data, and ByteDance is clearly not willing to launch globally and deal with the fallout after the fact. Pausing before launch - rather than after - is actually a notable shift in how these companies are approaching risk.

For context, ByteDance isn't alone in this squeeze. The AI copyright landscape is getting more contested by the month. A delayed launch is frustrating for users, but it beats a $1B lawsuit six months after release.

🎭 AI Companies Want to Harvest Human Emotion - From Improv Actors

If you've ever done improv theater, some of the world's leading AI companies apparently want to pay you for it. The Verge reports that AI firms are recruiting improv actors to generate training data focused on authentic human emotion - the ability to portray feelings convincingly, stay in character, and respond naturally to unexpected situations.

The Emotion Gap That Benchmarks Can't Measure

The job listings ask for people with strong creative instincts and the ability to authentically portray emotion - skills that are genuinely hard to capture in standard training datasets. The implication is that current AI models are still missing something fundamental about how humans express and respond to feeling.

There's something poetic about this. While AI is reshaping creative industries at scale, the one thing it still needs is deeply, stubbornly human - the messy, intuitive, unscripted way we actually feel things. Improv actors, of all people, might be the last group you'd expect to become essential AI infrastructure.

🛠️ LangChain Releases Deep Agents for Complex Multi-Step AI Tasks

LangChain has launched Deep Agents, a structured runtime designed specifically for the kinds of AI agent tasks where standard LLM loops fall apart - think multi-step workflows, stateful memory across tasks, and managing multiple artifacts at once. The library is built on top of LangChain's existing agent infrastructure and is designed to fill the gap between simple tool-calling and genuinely complex autonomous workflows.

What Breaks When Agents Go Multi-Step

The core problem LangChain is solving is real and well-documented: most agent frameworks work fine for short, contained tasks but degrade quickly when a job requires planning, memory of previous steps, and managing multiple outputs simultaneously. Deep Agents introduces context isolation - keeping different parts of a complex task from interfering with each other.

For developers building production-grade tools on top of LLMs, this kind of structured runtime is something teams have been hand-rolling in-house for months. Having it as a standalone library lowers the barrier significantly. Worth checking out if you're deep in agent development - there's a rough sketch of the API's shape below.
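
To make that concrete, here's a minimal sketch of wiring up a Deep Agent, loosely following the quickstart shape of the deepagents package. Treat the exact signature of create_deep_agent, the instructions parameter name, and the stub search_papers tool as assumptions to verify against the current docs, not a definitive recipe - and note it expects model credentials (by default an Anthropic key) in your environment.

```python
# Minimal Deep Agents sketch. Assumes: `pip install deepagents`,
# the `create_deep_agent` entry point, and LLM credentials in the
# environment. Check current docs before relying on exact names.
from deepagents import create_deep_agent


def search_papers(query: str) -> str:
    """Toy stand-in for a real search tool the agent can call."""
    return f"(stub) top results for: {query}"


# The runtime wraps a standard tool-calling loop with planning,
# sub-agent spawning, and scratch files kept in agent state - the
# "context isolation" the article describes.
agent = create_deep_agent(
    tools=[search_papers],
    instructions=(
        "You are a research assistant. Plan multi-step tasks, "
        "write intermediate notes to files, and keep subtasks isolated."
    ),
)

# Invoked like any LangGraph graph: messages in, final state out.
result = agent.invoke(
    {"messages": [{"role": "user",
                   "content": "Summarize recent work on agent memory."}]}
)
print(result["messages"][-1].content)
```

The interesting design choice, as far as the announcement describes it, is that plans and intermediate artifacts live in the agent's own state rather than being stuffed back into the chat history - which is what keeps long multi-step jobs from drowning the model's context window.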

Speaking of building fast - if you need to spin up a web presence for an AI project quickly, 60sec.site lets you build a complete website in under a minute using AI. Worth bookmarking if you're a developer or founder in this space.

🌎 Trivia Reveal

The answer is AlphaGo! Developed by DeepMind, AlphaGo defeated Lee Se-dol 4-1 in March 2016 in a match that captivated the world. It was a watershed moment for AI - Go was considered far too complex for machines to master, and AlphaGo's victory changed that assumption overnight. DeepMind co-founder Demis Hassabis, who was at the center of that moment, is the subject of a new biography reviewed this week, The Infinity Machine.

💬 Quick Question

Today's stories touched on AI being used to scam people, AI being pulled after safety concerns, and AI trying to learn human emotion from improv actors. My question for you: what's the AI development right now that worries you most? Hit reply and tell me - I read every single response and genuinely want to know what's on your mind.

That's it for today. For more daily coverage, visit Daily Inference - and we'll see you tomorrow with more from the fast-moving world of AI. Stay sharp. 👋
