🤖 Daily Inference
Sunday, December 7, 2025
The collision between AI innovation and accountability reached a fever pitch this weekend. The New York Times just fired its biggest legal salvo yet at an AI startup, deepfakes are weaponizing the likenesses of trusted doctors to spread dangerous health misinformation, and Anthropic quietly revealed it's using Claude in ways that could fundamentally reshape corporate hiring. Meanwhile, a music industry legend issued a stark warning about AI's unstoppable advance. Here's everything that matters.
⚖️ The New York Times vs. Perplexity: The Copyright War Escalates
The New York Times launched a major lawsuit yesterday against AI search startup Perplexity, accusing the company of 'illegal' copying of millions of articles. This marks the Times' most aggressive legal action since its December 2023 lawsuit against OpenAI, signaling that major publishers are done negotiating and ready to fight.
The lawsuit alleges that Perplexity systematically scraped and reproduced Times content without permission or payment, using it to train AI models and generate responses that effectively replace the need to visit the original articles. Unlike traditional search engines that direct users to publisher websites, Perplexity's AI-powered search provides comprehensive answers that synthesize information from multiple sources—often making the click-through unnecessary. This represents a fundamental threat to the business model that sustains journalism: if users get the information without visiting the site, advertising revenue and subscriptions evaporate.
The timing is significant. As AI companies race to build more capable search and answer engines, they're increasingly colliding with content creators who see their work being monetized without compensation. The outcome of this lawsuit could establish critical precedents for how AI companies can legally use copyrighted content—and whether the existing framework of fair use extends to training datasets containing millions of articles. For the broader AI industry, this isn't just about one startup; it's about whether the foundational business model of AI-powered search can survive legal scrutiny.
⚠️ Deepfake Doctors: When AI Impersonates Medical Authority
A disturbing trend is emerging on social media: AI-generated deepfakes of real doctors spreading health misinformation. According to a new report, bad actors are creating convincing video deepfakes that feature the faces and voices of actual physicians delivering false or misleading medical advice—a development that threatens to erode trust in medical expertise precisely when accurate health information matters most.
The deepfakes exploit our hardwired tendency to trust medical professionals, using their credibility as a weapon. Real doctors are finding their likenesses hijacked to promote everything from unproven treatments to dangerous health conspiracies. The technology has become sophisticated enough that casual viewers can't easily distinguish fake videos from real ones—the lip sync is accurate, the voice cloning is convincing, and the visual quality matches authentic medical content. This isn't just identity theft; it's authority theft, where the professional standing of medical experts becomes a tool for spreading exactly the kind of misinformation they've spent careers fighting.
The implications for public health are severe. During health crises, misinformation can directly lead to preventable deaths. When that misinformation comes cloaked in the authority of a trusted physician's face and voice, its potential for harm multiplies. Social media platforms are struggling to detect and remove these deepfakes quickly enough, and the doctors themselves often only discover they've been impersonated after the videos have already spread widely. This arms race between deepfake creators and detection systems is one the platforms are currently losing—and the cost is measured in public trust and potentially in lives.
🏢 Anthropic's Bold Experiment: Claude Conducts Job Interviews
In a move that signals how quickly AI is moving from tool to autonomous agent, Anthropic revealed it is using Claude to conduct actual job interviews: not just screening resumes or scheduling calls, but actually interviewing candidates and evaluating their responses. This represents a significant evolution in how companies might approach hiring, and it's happening at one of the world's leading AI research labs.
Anthropic is positioning this as both a practical tool and a research experiment. Claude asks candidates questions, follows up based on their answers, probes for deeper understanding, and evaluates responses against role requirements. The AI can conduct these conversations at scale, ensuring every candidate gets asked the same baseline questions while still adapting dynamically to individual responses. For Anthropic, this serves dual purposes: improving their hiring process while generating real-world data about how AI performs in nuanced human interactions that require judgment, empathy, and fairness.
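Anthropic hasn't published how its interviewer is built, but the mechanics described above map cleanly onto a multi-turn conversation loop. Below is a minimal, hypothetical sketch using Anthropic's public Messages API in Python; the model id, system prompt, turn count, and rubric are assumptions for illustration, not Anthropic's actual setup.

```python
# Hypothetical interview loop built on Anthropic's public Messages API.
# Nothing here reflects Anthropic's internal hiring system; the system
# prompt, model id, and turn limit are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are conducting a structured interview for a backend engineer role. "
    "Ask one question at a time, follow up on vague or shallow answers, and "
    "cover system design, debugging process, and collaboration. Be neutral, "
    "consistent, and fair across candidates."
)

def run_interview(max_turns: int = 5) -> list[dict]:
    """Alternate Claude's questions with the candidate's typed answers."""
    messages = [{"role": "user", "content": "Please begin the interview."}]
    for _ in range(max_turns):
        reply = client.messages.create(
            model="claude-sonnet-4-20250514",  # assumed model id
            max_tokens=400,
            system=SYSTEM_PROMPT,
            messages=messages,
        )
        question = reply.content[0].text
        print(f"\nInterviewer: {question}")
        answer = input("Candidate: ")
        # Append both sides so the next question adapts to the answer.
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": answer})
    return messages

if __name__ == "__main__":
    transcript = run_interview()
```

The key design point is the shared messages list: because every prior question and answer is replayed each turn, the model can probe with dynamic follow-ups while the fixed system prompt keeps the baseline questions consistent across candidates, which is exactly the scale-plus-adaptivity tradeoff described above.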
The implications extend far beyond Anthropic's own recruiting. If AI can reliably conduct interviews—traditionally one of the most human-centric parts of business—it raises profound questions about where AI assistants end and AI decision-makers begin. Will candidates accept AI interviewers? Can Claude genuinely assess soft skills, cultural fit, and the intangibles that often matter most? And critically, how do companies ensure AI interviewers don't perpetuate or amplify hiring biases? Anthropic is essentially beta-testing a future where your first interview might not be with a person at all—a future that's arriving faster than many organizations are prepared for.
Speaking of building for the future—if you're looking to establish your online presence quickly, 60sec.site uses AI to create professional websites in under a minute. While Claude's conducting interviews, you could have a landing page live. Check out more AI tools and insights at dailyinference.com.
🎵 Dave Stewart's Warning: AI Is Unstoppable, Musicians Must Adapt
Eurythmics co-founder Dave Stewart issued a stark message to the music industry yesterday: AI is an 'unstoppable force' that musicians must embrace rather than resist. Coming from a legendary artist and producer, this isn't a tech enthusiast's hype—it's a creative veteran's pragmatic assessment of where the industry is heading.
Stewart's argument isn't that AI will improve music or make artists obsolete—it's that AI is already here, already transforming how music is created and consumed, and fighting it is futile. He's urging musicians to engage with the technology, understand its capabilities, and figure out how to use it as a tool rather than treating it as an enemy. This perspective stands in sharp contrast to the many artists who've spoken out against AI-generated music, seeing it as a threat to their livelihoods and artistic integrity. Stewart's position: those concerns are valid, but the technology isn't going to stop developing just because artists object.
The music industry has faced technological disruption before—from recorded music replacing live performances as the primary revenue source, to digital downloads upending the album format, to streaming radically changing how artists get paid. Each time, the industry eventually adapted, though not without casualties. Stewart's message is essentially that AI represents the next wave in this pattern, and artists who learn to work with it will fare better than those who don't. Whether that's correct—and whether embracing AI means compromising artistic values—remains the critical debate as generative AI becomes capable of producing increasingly convincing music.
🔮 What This Week Reveals About AI's Direction
This weekend's developments paint a clear picture: AI is moving faster than our social, legal, and ethical frameworks can accommodate. Publishers are fighting for survival in court. Deepfakes are undermining trust in medical expertise. AI is moving from assistant to autonomous actor in corporate functions. And creative industries are grappling with whether to adapt or resist.
The common thread? We're past the point where AI's impact is theoretical. These are concrete problems demanding immediate solutions: legal precedents for training data, detection systems for harmful deepfakes, guidelines for AI decision-making in sensitive contexts like hiring, and frameworks that protect both innovation and human interests. The technology isn't waiting for us to figure this out. The question is whether our institutions can move fast enough to shape AI's deployment rather than just react to its consequences.
Stay informed with daily AI insights at dailyinference.com. The only way to navigate AI's rapid evolution is to understand what's changing—and why it matters.