☀️ TRENDING AI NEWS
🤖 Mistral AI releases Mistral Small 4 - a 119B MoE model combining instruction, reasoning, and multimodal in one
🏢 Nvidia CEO Jensen Huang projects $1 trillion in Blackwell and Vera Rubin chip orders at GTC 2026
🚨 Three teens sue xAI over Grok-generated child sexual abuse material in landmark class action
⚖️ Merriam-Webster and Encyclopedia Britannica sue OpenAI over alleged copyright theft of 100,000 articles
A $1 trillion chip forecast. A model that does it all. A lawsuit that could define AI's legal reckoning with content creators. And a child safety crisis that puts one of the most controversial AI chatbots squarely in the crosshairs.
Wednesday has a lot going on. Let's get into it.
🤓 AI Trivia
Mistral Small 4 is a Mixture-of-Experts (MoE) model. What does MoE architecture do that makes it more efficient than a standard dense model?
🧠 It compresses all parameters into a single smaller network
⚡ It activates only a subset of its parameters for each input token
🔢 It splits training across multiple GPUs automatically
🗂️ It uses separate models for each language it supports
The answer is hiding near the bottom of today's newsletter... keep scrolling. 👇
⚡ Jensen Huang Just Said $1 Trillion Out Loud
At Nvidia's GTC 2026 conference, CEO Jensen Huang projected that orders for its Blackwell and next-generation Vera Rubin chips will reach $1 trillion. That is not a typo.
The Largest Chip Bet in History
To put that in perspective, Nvidia's total revenue for all of fiscal year 2025 was around $130 billion. A $1 trillion order projection represents a complete step-change in how the company - and the broader market - is thinking about AI infrastructure buildout over the coming years.
Huang also unveiled DLSS 5 at GTC, Nvidia's new generative AI-powered graphics upscaling technology for video games. Instead of just reconstructing frames, DLSS 5 uses generative AI to synthesize entirely new visual detail - blending real-time rendering with AI-generated imagery. Early reactions are split: some call it a leap in photorealism, others worry it unacceptably overrides the artistic intent of game developers.
Either way, Nvidia is pushing generative AI into every corner of computing - not just the data center. Huang suggested the DLSS 5 approach could eventually spread beyond gaming into simulation and industrial design.
If you're tracking the hardware side of the AI race, our AI infrastructure tag page has all the context you need.
🤖 Mistral Small 4 Wants to Do Everything
Mistral AI just released Mistral Small 4, and the headline here is consolidation. Rather than maintaining separate models for different tasks, Mistral has rolled instruction following, reasoning, and multimodal understanding into a single 119-billion-parameter Mixture-of-Experts model.
One Model to Replace Three
Previously, Mistral users would reach for Mistral Small for chat tasks, Magistral for reasoning-heavy workloads, and Pixtral for image understanding. Small 4 is designed to replace all three in a single deployment. For teams managing multiple model endpoints, that is a meaningful operational simplification.
The MoE architecture means the model does not activate all 119 billion parameters for every request - only a relevant subset fires for each input. This makes it more compute-efficient in practice than its total parameter count implies, and better suited for serving a wide range of tasks without the overhead of running multiple specialized models.
Mistral has been quietly building one of the most capable open-weight model families in the industry. If you want to follow their releases, the Mistral AI tag page keeps everything in one place.
🚨 xAI Sued Over Grok-Generated Child Sexual Abuse Material
Three Tennessee teenagers - two of them still minors - filed a landmark class action lawsuit against Elon Musk's xAI on Monday, alleging that the Grok AI chatbot used real photos of them to generate and distribute child sexual abuse material (CSAM) without their knowledge.
The Spicy Mode Problem
The lawsuit alleges Grok produced sexualized images and videos after being prompted through its so-called "spicy mode" - and that xAI's leadership knew this was possible when they launched the feature. The plaintiffs are seeking to represent a broader class of anyone who had real images of themselves as a minor altered into sexual content by Grok.
Senator Elizabeth Warren separately pressed the Pentagon this week over its decision to grant xAI access to classified networks, citing Grok's history of harmful outputs as a potential national security risk. The timing of both developments - the lawsuit and the Senate pressure - makes this arguably the worst week xAI has faced since its founding.
This is the first class action of its kind filed against an AI image generator over CSAM. Depending on how it proceeds, it could set a significant legal precedent for how AI companies are held liable for outputs generated from their tools. Follow the child safety tag for ongoing coverage.
⚖️ Merriam-Webster and Britannica Take OpenAI to Court
The dictionary is suing the chatbot. On Friday, Merriam-Webster and Encyclopedia Britannica filed a copyright lawsuit against OpenAI, alleging that the company used nearly 100,000 of their articles to train its large language models without permission.
GPT-4 Allegedly 'Memorized' Their Content Word for Word
The lawsuit's most striking claim is that GPT-4 has effectively "memorized" Britannica's content - generating responses that are "substantially similar" to copyrighted articles when prompted. That is a stronger legal argument than general training data complaints because it suggests the model can reproduce specific protected text on demand, not just learn patterns from it.
This case joins a rapidly growing pile of copyright litigation facing OpenAI - from news publishers to authors to now reference publishers. The legal theory being tested is whether reproducing near-identical content in model outputs crosses from training fair use into direct infringement. Courts haven't settled this yet, and each new case shapes the battlefield.
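To build intuition for what "memorized" means technically, one simple heuristic researchers use is checking how many of an output's word n-grams appear verbatim in the source text. This is a sketch for illustration only - not the similarity test the plaintiffs or a court would actually apply, and the example strings are invented:

```python
def ngram_overlap(candidate, reference, n=8):
    """Fraction of the candidate's word n-grams found verbatim in the
    reference. High overlap at large n points to verbatim reproduction
    rather than paraphrase. (Illustrative heuristic only.)"""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    cand = ngrams(candidate)
    return len(cand & ngrams(reference)) / len(cand) if cand else 0.0

article = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over the lazy dog"
paraphrase = "a fast brown fox leaps over a sleepy dog"

print(ngram_overlap(copied, article, n=3))      # 1.0 - verbatim reproduction
print(ngram_overlap(paraphrase, article, n=3))  # 0.0 - same meaning, new words
```

An output that merely learned patterns scores near zero; an output reproducing protected text at length scores near one - which is why the "substantially similar" claim is a sharper legal weapon than a generic training-data complaint.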
If you're building products that depend on LLM-generated content, the outcome of cases like this matters a lot for your legal exposure. Our AI copyright tag page tracks all the major cases.
🛡️ Sears Left Customer AI Conversations Exposed to the Open Web
Wired reported yesterday that Sears exposed customer conversations with its AI chatbot - including phone calls and text chats - to anyone with a browser. No login required. No authentication. Just open.
A Goldmine for Phishing Attackers
The exposed data included contact information and personal details shared during customer service interactions. Security researchers note that this kind of exposure is a direct enabler of targeted phishing attacks - an attacker who knows exactly what a customer called about, what product they bought, and their contact details can craft a highly convincing follow-up scam.
This story is a sharp reminder that deploying AI-powered customer service tools without proper access controls creates compounding risk. The chatbot itself may work perfectly - but if the conversation logs are exposed, you have handed attackers a detailed map of your customers' problems and identities.
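For anyone shipping a similar chatbot, the minimum bar is rejecting unauthenticated requests for conversation logs. Here is a minimal sketch assuming a simple bearer-token scheme - the token store and values are hypothetical, not how Sears' system actually works:

```python
import hmac

# Hypothetical token store; in production this would come from a
# secrets manager, with tokens scoped per customer.
VALID_TOKENS = {"s3cr3t-agent-token"}

def is_authorized(headers):
    """Return True only if the request carries a valid bearer token.
    The Sears exposure was the opposite case: conversation logs served
    with no check at all. Authentication alone is not enough - you also
    need per-customer authorization - but it is the floor."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison to avoid timing side channels.
    return any(hmac.compare_digest(token, t) for t in VALID_TOKENS)

print(is_authorized({"Authorization": "Bearer s3cr3t-agent-token"}))  # True
print(is_authorized({}))                                              # False
```

The point is not this specific scheme - it is that the check exists at all, sitting between the open web and your customers' data.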
Speaking of building fast without sacrificing security - if you're spinning up a web presence for a project, 60sec.site lets you build a professional AI-powered website in under a minute. Worth bookmarking.
🌎 Trivia Reveal
The answer is ⚡ - MoE activates only a subset of its parameters for each input token. In a standard dense model, every parameter fires for every request. In a Mixture-of-Experts model, a routing mechanism decides which 'expert' sub-networks are relevant for each token and activates only those. The result is a model that can have a very large total parameter count (like Mistral Small 4's 119B) while consuming far less compute per inference than a dense model of equivalent size.
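The routing idea above can be sketched in a few lines of Python. This is a toy, not Mistral's implementation - the expert count, router scores, and top-k value are invented for illustration:

```python
# Toy Mixture-of-Experts forward pass for a single token.
def top_k_route(scores, k=2):
    """Indices of the k highest-scoring experts. Only these run; the
    rest stay idle, which is why active compute per token is far below
    the model's total parameter count."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

def moe_forward(token, experts, router, k=2):
    """Route one token, run only the chosen experts, and combine their
    outputs weighted by normalized router scores."""
    scores = router(token)
    chosen = top_k_route(scores, k)
    total = sum(scores[i] for i in chosen)
    return sum(scores[i] / total * experts[i](token) for i in chosen)

# Four toy "experts" (each just multiplies its input) and a fixed router.
experts = [lambda x, m=m: m * x for m in (1, 2, 3, 4)]
router = lambda x: [1.0, 3.0, 2.0, 0.5]

print(moe_forward(5, experts, router, k=2))  # only experts 1 and 2 fire → 12.0
```

With k=2 out of 4 experts, half the "parameters" sit idle on every token - scale that ratio up and you get a 119B-parameter model with the per-request cost of something much smaller.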
💬 Quick Question
With Mistral Small 4 combining instruction following, reasoning, and vision into one model, I'm curious: are you running any open-weight models locally or through self-hosted APIs, or are you sticking entirely with commercial APIs like OpenAI and Anthropic? Hit reply and tell me your current setup - I read every response!
That's all for today. A lot of legal and regulatory heat hitting the AI industry right now alongside some genuinely exciting model releases - it's the kind of week that defines where things are heading.
Stay curious, and see you tomorrow. For more daily AI coverage, visit dailyinference.com.