🤖 Daily Inference
AI regulation intensified yesterday as the EU opened a formal investigation into Google's data practices, while new research revealed a troubling trend: a quarter of teenagers now turn to AI chatbots for mental health support. From regulators in Brussels to the environmental footprint of AI infrastructure in the US, today's developments highlight the growing tension between AI innovation and its societal consequences.
⚖️ EU Launches Major Investigation Into Google's AI Training Practices
The European Union opened a formal investigation yesterday into how Google uses online content to train its AI models, marking a significant escalation in regulatory scrutiny of big tech's data practices. The probe, launched by EU competition regulators, focuses on whether Google's collection and use of that content for its Gemini AI models violates European data protection and competition laws.
The investigation comes as tech giants face mounting pressure over their approach to AI training data—a critical resource that has become increasingly contentious as publishers, artists, and content creators argue they should be compensated when their work is used to train commercial AI systems. Google's Gemini models, which compete directly with OpenAI's ChatGPT and Anthropic's Claude, require massive amounts of text, images, and other data to function effectively. The EU's probe will examine whether Google has adequately informed content creators about how their data is being used and whether the company's practices comply with the bloc's strict data protection framework.
This investigation represents a watershed moment for the AI industry. If regulators determine Google violated data protection rules, the company could face substantial fines and be forced to fundamentally change how it sources training data, a precedent that would ripple across the entire AI sector. For businesses building AI-powered tools, the message is clear: transparency about data sourcing is no longer optional. On that note, if you're looking to get an AI-powered website online quickly, 60sec.site offers an AI website builder that can have you up and running fast.
🧠 Quarter of Teenagers Now Using AI Chatbots for Mental Health Support
A new study reveals that 25% of teenagers now turn to AI chatbots for mental health support, with many describing these digital companions as "friends" they can confide in without judgment. The research, published yesterday, highlights a dramatic shift in how young people seek emotional support and raises urgent questions about the psychological implications of forming attachments to artificial intelligence.
The study found that teens are using AI chatbots to discuss anxiety, depression, relationship problems, and other sensitive issues they feel uncomfortable sharing with parents, teachers, or even peers. Many respondents cited the 24/7 availability, non-judgmental responses, and perceived confidentiality as key reasons for preferring AI over human support. One striking finding: teens reported feeling their AI chatbot "understood" them in ways that adults in their lives did not. This phenomenon reflects both the sophistication of modern conversational AI and the growing isolation many young people experience in an increasingly digital world.
However, mental health experts are sounding alarms about the potential dangers. AI chatbots lack the training, ethical oversight, and genuine empathy that human therapists provide. They cannot reliably recognize crisis situations, may reinforce harmful thought patterns, and could deter teens from seeking professional help when they need it most. The research underscores an uncomfortable reality: as AI becomes more sophisticated and accessible, society is conducting a massive, largely unregulated experiment on adolescent mental health. The findings point to an urgent need for guidelines around AI mental health applications and better education for both teens and parents about the limitations and risks of relying on artificial companions for emotional support.
⚠️ Foreign States Weaponize AI Videos in Disinformation Campaigns
UK Home Secretary Yvette Cooper revealed yesterday that foreign states are deploying sophisticated AI-generated videos to undermine public support for Ukraine, marking a dangerous evolution in state-sponsored disinformation campaigns. Cooper's warning highlights how rapidly improving AI video generation technology has become a tool for geopolitical manipulation, with hostile actors creating increasingly convincing fake footage to shape public opinion and sow discord.
The use of AI-generated video represents a significant escalation from earlier disinformation tactics, which relied primarily on text posts and manipulated images. Modern AI video tools can create realistic-looking footage of events that never happened or words that were never spoken, making it far harder for ordinary citizens to distinguish truth from fabrication. Cooper's statement indicates that these campaigns are specifically designed to erode Western support for Ukraine by creating false narratives about the conflict, potentially influencing public opinion and, by extension, government policy on military and humanitarian aid.
The revelation comes at a critical moment as AI video generation technology becomes more accessible and harder to detect. Unlike earlier deepfakes that often contained telltale artifacts, newer AI systems produce increasingly seamless results that can fool even trained observers. Cooper's public warning suggests UK intelligence agencies are tracking a coordinated campaign of sufficient scale and sophistication to warrant high-level government attention. For democracies worldwide, this represents a new frontier in information warfare—one where the evidence of our own eyes can no longer be trusted without verification, and where hostile actors can manufacture reality at scale to advance strategic objectives.
🌍 200+ Environmental Groups Demand Moratorium on New US Datacenters
More than 200 environmental organizations have called for a halt to new datacenter construction in the United States, citing the massive energy consumption and environmental impact of facilities that power AI systems. The coalition, which includes major climate advocacy groups, issued the demand in a letter highlighting how the AI boom is driving unprecedented datacenter expansion that threatens to derail climate goals and strain local power grids.
The groups' concerns center on the enormous energy requirements of AI training and inference. Large language models and other AI systems require vast computational resources, which in turn demand massive amounts of electricity; a single large facility can draw as much power as tens of thousands of homes. As tech companies race to build more powerful AI systems, datacenter construction has accelerated dramatically, with many facilities being built in regions where the electrical grid relies heavily on fossil fuels. The environmental coalition argues that this expansion is happening without adequate environmental review or consideration of cumulative impacts on climate emissions, water resources (used for cooling), and local communities. For a rough sense of the scale involved, see the back-of-envelope sketch below.
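The following is a minimal back-of-envelope sketch, not figures from the coalition's letter: it assumes a large AI datacenter with a continuous draw of around 100 megawatts and an average US household draw of roughly 1.2 kilowatts, both illustrative round numbers.

```python
# Back-of-envelope: datacenter power draw vs. household draw.
# All constants below are illustrative assumptions, not reported figures.

DATACENTER_MW = 100      # assumed continuous draw of a large AI datacenter
HOME_AVG_KW = 1.2        # assumed average continuous draw of a US home
HOURS_PER_YEAR = 8760

# How many homes' worth of continuous power one facility consumes.
homes_equivalent = (DATACENTER_MW * 1_000) / HOME_AVG_KW

# Annual energy consumption in gigawatt-hours.
annual_gwh = DATACENTER_MW * HOURS_PER_YEAR / 1_000

print(f"~{homes_equivalent:,.0f} homes' worth of continuous draw")
print(f"~{annual_gwh:,.0f} GWh consumed per year")
```

Under these assumptions, a single 100 MW facility draws as much power as roughly 83,000 homes and consumes on the order of 876 GWh per year, which helps explain why grid capacity and cumulative impacts feature so prominently in the coalition's demands.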
The moratorium call reflects growing tension between AI innovation and environmental sustainability. Tech companies have largely positioned AI as a tool that can help solve climate change through better modeling, optimization, and efficiency. Yet the infrastructure required to develop and deploy AI at scale is itself becoming a significant climate problem. The coalition is demanding that before any new datacenters are approved, developers must demonstrate they will run on 100% renewable energy, conduct comprehensive environmental impact assessments, and prove that local power grids can handle the additional load without increasing fossil fuel use. This showdown between AI ambitions and environmental realities is likely to intensify as models grow larger and more computationally demanding.
👥 European Youth Movement Mobilizes Against Tech Giant Dominance
A grassroots youth movement demanding digital justice is spreading rapidly across Europe, with young activists organizing to challenge the dominance of major tech platforms and advocate for stronger regulation of AI and social media companies. The movement, which has gained momentum in recent months, represents a generational shift in attitudes toward technology—moving from uncritical embrace to demands for accountability, transparency, and democratic control over digital systems that shape daily life.
The activists are calling for policies that would fundamentally reshape the relationship between citizens and tech giants. Their demands include stricter data privacy protections, mandatory algorithmic transparency, limits on AI surveillance, and breaking up monopolistic platforms. Unlike earlier tech criticism that focused primarily on content moderation or misinformation, this movement addresses the structural power of tech companies and their role in society. Organizers argue that a handful of corporations should not have unilateral control over critical digital infrastructure, especially as AI systems become more powerful and integrated into education, employment, healthcare, and governance.
What makes this movement particularly significant is its pan-European coordination and its explicit focus on youth perspectives. Young Europeans have grown up as digital natives, but many are now questioning the terms of that digital existence, particularly as they watch AI expand into mental health (as today's study on teen chatbot use illustrates), education, and employment. The movement's message to policymakers is blunt: "Don't pander to the tech giants." Instead, activists are demanding that regulations prioritize human rights, democratic values, and social benefit over corporate interests and rapid technological deployment. As AI capabilities accelerate, this youth-led push for digital justice may mark the beginning of a broader societal reckoning with how much power we're willing to cede to algorithmic systems.
🚀 Poetiq AI Achieves Major Reasoning Benchmark Breakthrough
While regulatory concerns dominate headlines, AI capabilities continue advancing at a remarkable pace. Poetiq, a relatively unknown AI research company, has achieved a significant breakthrough on a major reasoning benchmark, demonstrating performance that challenges assumptions about which companies lead in AI development. The achievement highlights how the AI research landscape is becoming more distributed, with smaller players occasionally matching or exceeding the capabilities of well-funded tech giants.
The reasoning benchmark success is particularly noteworthy because reasoning remains one of the most challenging aspects of AI development. While large language models excel at pattern recognition and language generation, complex multi-step reasoning—the kind required for advanced problem-solving, mathematical proofs, and logical deduction—has proven more difficult to achieve reliably. Poetiq's breakthrough suggests that innovative approaches to model architecture or training methods may be yielding results that don't necessarily require the massive computational resources typically associated with frontier AI development. Details about their specific methodology remain limited, but the achievement demonstrates that the race to build more capable AI systems extends well beyond the usual suspects of OpenAI, Google, and Anthropic.
Today's developments paint a complex picture of AI's trajectory. Regulators are stepping up scrutiny, young people are forming unexpected relationships with AI systems, hostile actors are weaponizing the technology, environmentalists are sounding alarms about infrastructure impacts, youth movements are demanding democratic control—and through it all, the technology itself keeps advancing. The tension between innovation and governance has never been more apparent, and how society navigates these competing pressures will shape the AI landscape for years to come.
Stay informed on these rapidly evolving developments by visiting dailyinference.com for our daily AI newsletter.