🤖 Daily Inference

Good morning! Today we're covering Alibaba's massive new AI model designed specifically for agents, a landmark lawsuit against Google over voice cloning in NotebookLM, ByteDance backing down after Hollywood threatens legal action, and the UK extending online safety rules to AI chatbots. Plus, OpenAI brings on the creator of OpenClaw, and a KPMG partner gets fined for using AI to cheat on an AI training test. Let's dive in.

🚀 Alibaba Releases Massive 397B Parameter Model for AI Agents

Alibaba's Qwen team just dropped Qwen3.5-397B, a mixture-of-experts (MoE) model optimized specifically for AI agents. While the model contains 397 billion parameters in total, it activates only about 17 billion of them for any given token, making it computationally efficient while retaining powerful capabilities. The model also supports an impressive 1-million-token context window, meaning it can process and keep track of massive amounts of information in a single conversation.

What makes this release particularly interesting is its focus on agentic workflows. An MoE architecture routes each input to a small set of specialized "expert" sub-networks, which lets the model handle diverse tasks without the computational cost of activating all of its parameters. That makes it well suited to AI agents that need to perform varied operations - from web searches to API calls to complex reasoning - within a single task.
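For intuition, here's a toy top-k routing layer in Python/NumPy. This is a sketch of the general MoE idea with made-up dimensions, gate, and experts - not Qwen's actual implementation - but at Qwen3.5's reported scale the same principle means only about 17B of 397B parameters (roughly 4%) do work on each token:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token through the top-k experts of a toy MoE layer."""
    logits = x @ gate_w                        # router score per expert
    top_k = np.argsort(logits)[-k:]            # indices of the k best experts
    weights = np.exp(logits[top_k] - logits[top_k].max())
    weights /= weights.sum()                   # softmax over the chosen k only
    # Only the selected experts run; the rest stay idle, which is why
    # "active" parameters are a small fraction of the total.
    return sum(w * experts[int(i)](x) for w, i in zip(weights, top_k))

rng = np.random.default_rng(0)
d, n_experts = 16, 8                           # toy sizes, not Qwen's
experts = [
    (lambda W: (lambda v: np.tanh(v @ W)))(rng.normal(size=(d, d)) / np.sqrt(d))
    for _ in range(n_experts)
]
gate_w = rng.normal(size=(d, n_experts))
token = rng.normal(size=d)
print(moe_forward(token, gate_w, experts).shape)  # -> (16,)
```

Real MoE layers add refinements like load-balancing losses and per-layer routing, but the core economics - compute scales with active experts, not total parameters - is what makes the architecture attractive for agent workloads.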

The million-token context window is a game-changer for enterprise applications. It means agents can analyze entire codebases or lengthy documents and maintain context across complex multi-step operations without losing track of earlier information. Alibaba is positioning this as a direct competitor to models from OpenAI and Anthropic in the rapidly growing AI agent market.

⚖️ NPR Host Sues Google Over NotebookLM Voice Cloning

Longtime NPR host David Greene has filed a lawsuit against Google over the company's NotebookLM feature, which he claims uses an AI-generated voice that sounds strikingly similar to his own. NotebookLM, Google's AI-powered research assistant, includes an "Audio Overview" feature that converts notes and documents into podcast-style conversations between two synthetic hosts. Greene alleges that one of these voices is an unauthorized imitation of his distinctive broadcasting voice.

This lawsuit represents a potentially landmark case in voice AI and digital rights. While many AI companies have faced copyright lawsuits over training data, voice imitation raises different legal questions around personality rights and the right of publicity. Greene's recognizable voice has been his professional trademark for decades, and the lawsuit argues that Google's use of a similar-sounding AI voice amounts to commercial exploitation without permission or compensation.

The case could set important precedents for the AI industry. As voice cloning technology becomes increasingly sophisticated and accessible, questions about consent, attribution, and compensation for voice likeness are becoming critical. Scarlett Johansson previously raised similar concerns when OpenAI released a voice assistant that sounded remarkably like her, though no lawsuit materialized in that case. Greene's legal action suggests voice professionals are prepared to defend their vocal identity in court.

🎬 ByteDance Adds Safeguards to Video AI After Disney Threat

ByteDance, TikTok's parent company, is rolling back some capabilities of its Seedance 2.0 AI video generator after facing threats of legal action from Hollywood studios, particularly Disney. The AI tool had gained notoriety for producing remarkably realistic deepfakes of celebrities including Tom Cruise and Brad Pitt, raising alarm bells throughout the entertainment industry. Disney reportedly threatened legal action over potential copyright infringement and unauthorized use of actor likenesses.

In response, ByteDance announced it will implement stronger safeguards to prevent the creation of videos featuring recognizable public figures without authorization. The company is also adding watermarking technology to make AI-generated content more identifiable and introducing stricter content moderation policies. These changes represent a significant retreat from Seedance 2.0's initial launch, which positioned the tool as having minimal restrictions compared to competitors.

The Hollywood confrontation highlights the tension between AI innovation and intellectual property rights. While AI video generation technology continues to advance rapidly, the legal framework for protecting actor likenesses and preventing unauthorized deepfakes remains underdeveloped. ByteDance's quick pivot suggests companies are increasingly wary of legal exposure, even as they race to release cutting-edge AI capabilities. The incident also raises questions about how AI companies will balance innovation with responsibility as deepfake technology becomes more accessible.

🛡️ UK Extends Online Safety Rules to AI Chatbots

UK Prime Minister Keir Starmer announced yesterday that AI chatbots will now fall under the country's Online Safety Act, following a scandal involving Grok, Elon Musk's AI assistant. The decision means chatbot companies could face substantial fines or even be banned from operating in the UK if they fail to protect children from harmful content. The move makes the UK one of the first countries to explicitly regulate AI conversational systems under child safety legislation.

The regulatory expansion comes after reports that some AI chatbots provided inappropriate responses to minors or failed to properly flag concerning conversations. Under the new framework, chatbot providers must implement age verification systems, content filtering for minors, and mechanisms to detect and report potential harm. Companies will also be required to conduct regular risk assessments and maintain transparency about how their AI systems handle conversations with young users.

This represents a significant regulatory burden for AI companies operating in the UK market. While major players like OpenAI, Google, and Anthropic have resources to implement comprehensive safety systems, smaller AI startups may struggle with compliance costs. The UK's approach could influence AI regulation in other countries, as policymakers worldwide grapple with how to protect children in an era of increasingly sophisticated conversational AI.

🤝 OpenClaw Founder Peter Steinberger Joins OpenAI

Peter Steinberger, the developer who created OpenClaw - a viral open-source clone of Anthropic's Computer Use feature - has joined OpenAI. OpenClaw gained significant attention in the AI community for demonstrating that computer control capabilities could be replicated and improved upon by independent developers. The tool lets AI models operate computer interfaces directly, performing tasks like browsing, clicking, and typing.
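OpenClaw's internals aren't detailed here, but tools in this category generally run an observe-decide-act loop: capture the screen, ask a vision-capable model for the next UI action, execute it, and repeat. Here's an illustrative Python sketch using the real pyautogui library; `ask_model` is a hypothetical placeholder for whatever model API you'd wire in, not OpenClaw's actual code:

```python
import pyautogui  # cross-platform mouse/keyboard/screenshot control

def ask_model(screenshot):
    """Hypothetical placeholder: send the screenshot to a vision-capable
    model and get back the next action, e.g.
    {"type": "click", "x": 120, "y": 340},
    {"type": "type", "text": "hello"}, or {"type": "done"}."""
    raise NotImplementedError("wire up your model API here")

def agent_loop(max_steps=20):
    for _ in range(max_steps):
        shot = pyautogui.screenshot()           # observe: current screen
        action = ask_model(shot)                # decide: model picks an action
        if action["type"] == "click":           # act: drive the real UI
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.write(action["text"], interval=0.05)
        elif action["type"] == "done":
            break
```

The hard parts in practice are grounding (mapping the model's intent to reliable screen coordinates) and guardrails around destructive actions - presumably the kind of hands-on experience OpenAI is hiring for.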

Steinberger's hire is particularly notable given that some AI experts have questioned whether OpenClaw represents as significant a breakthrough as initial hype suggested. Critics pointed out that while impressive as an open-source project, the underlying techniques weren't fundamentally novel. However, OpenAI's decision to bring Steinberger aboard suggests the company values his implementation skills and understanding of computer control systems, which are crucial for developing practical AI agents.

The move continues OpenAI's pattern of acquiring talent from successful open-source projects and competitor companies. It also signals that computer control and agentic AI remain strategic priorities for OpenAI as it develops future versions of GPT and ChatGPT. For more on OpenAI's recent moves, check out our OpenAI coverage.

😬 KPMG Partner Fined for Using AI to Cheat on AI Training Test

In an ironic twist, a KPMG partner has been fined for using artificial intelligence to cheat during an AI training course assessment. The partner, who works at one of the world's largest accounting and consulting firms, reportedly used AI tools to complete test questions that were specifically designed to evaluate understanding of AI ethics, governance, and responsible use. The incident was discovered through the firm's monitoring systems, which detected suspicious patterns in test responses.

KPMG requires partners and employees to complete mandatory training on emerging technologies, including AI. The courses are meant to ensure that client-facing professionals understand both the capabilities and limitations of AI systems, particularly around ethical AI practices. Using AI to bypass this training not only violates professional standards but also defeats the very purpose of ensuring advisors can guide clients responsibly.

The case highlights a growing challenge for organizations: as AI tools become more accessible and powerful, preventing their misuse in educational and assessment contexts becomes increasingly difficult. It also raises questions about how companies should design AI training programs when the very tools being studied can be used to circumvent learning objectives. The incident serves as a reminder that academic integrity concerns extend beyond classrooms into professional settings.

💬 What Do You Think?

With NPR's David Greene suing Google over voice cloning and ByteDance facing Hollywood's wrath over celebrity deepfakes, we're seeing the collision between AI innovation and intellectual property rights play out in real time. Do you think current copyright and personality rights laws are adequate for the AI age, or do we need entirely new legal frameworks? I'm especially curious whether you think voice and likeness protections should differ from traditional copyright. Hit reply and let me know your thoughts - I read every response!

That's all for today! If you're looking to build a website quickly, check out 60sec.site, an AI-powered website builder that can get you online in minutes. And remember to visit dailyinference.com for more AI news delivered daily. Stay curious!
