When AI Feels Like a Friend: Regulators Move to Rein In Harmful Companionship
Rising alarm over AI companionship
Concerns about AI have long centered on existential risks, job displacement, and environmental costs. This week, however, attention shifted toward a different, immediate danger: children and teens forming unhealthy emotional attachments to conversational AI. High-profile lawsuits, research showing widespread use of AI for companionship, and media coverage of ‘AI psychosis’ have pushed the issue from academic debate into mainstream concern.
Lawsuits and research that changed the conversation
Two recent lawsuits allege that companion-like behavior in AI models contributed to the suicides of teenagers. A July report from Common Sense Media found that 72% of teenagers have used AI for companionship. Journalistic accounts highlighting people slipping into delusional spirals after prolonged chatbot interactions have amplified public fears that these systems can do real psychological harm.
California’s legislative response
The California legislature passed a bill that would require AI companies to remind users they know to be minors that responses are AI-generated, to develop protocols for handling suicide and self-harm, and to produce annual reports on instances of suicidal ideation in chatbot conversations. The bill passed with bipartisan support and now awaits the governor’s signature.
Critics note weaknesses: the bill does not mandate specific methods for identifying minors, and many companies already provide crisis referrals. Still, it represents the most significant state-level attempt so far to curb companion-like behavior in AI models and clashes with companies’ calls for national, uniform regulation.
Federal scrutiny: the FTC inquiry
The Federal Trade Commission opened an inquiry into seven companies, asking how they design companion-like characters, monetize engagement, and measure the impacts of their chatbots. The firms named are Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies.
Although the inquiry is preliminary, it could force companies to disclose internal practices designed to maximize user engagement. Political turmoil at the FTC adds uncertainty, as leadership changes and legal battles over dismissed commissioners shape the agency’s posture.
Industry reactions and leadership remarks
OpenAI’s CEO publicly acknowledged the tension between user privacy and protecting vulnerable users, saying it would be reasonable in some cases to contact authorities when a young person expresses serious suicidal intent and parents cannot be reached. Such statements signal a potential shift from strict privacy and user-autonomy defenses toward more interventionist policies in certain high-risk situations.
Policy directions and competing political approaches
The debate is drawing unusual bipartisan attention, but proposed remedies diverge. On the right, proposals emphasize age verification and shielding children from harmful content. On the left, momentum is returning to consumer-protection and antitrust approaches to holding large platforms accountable. The result may be a patchwork of state and local rules rather than a single federal standard.
Practical decisions for companies
AI firms now face concrete design and policy choices. Should chatbots interrupt or cut off conversations that trend toward self-harm, or might that itself cause further harm? Should conversational systems be regulated like licensed mental-health practitioners, or treated as entertainment products with warnings and safety measures? Many companies built chatbots to feel empathetic and humanlike but have deferred questions of caregiver-level standards and accountability; regulatory, legal, and public pressure is now closing that window. A minimal, hypothetical sketch of what "interrupting" a conversation could look like follows below.
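For illustration only, here is a minimal sketch of one possible guardrail: screen each incoming message for self-harm signals and, on a match, replace the model's reply with a crisis-resource message. The keyword list, function names, and crisis text are assumptions for the example, not any company's actual system or any regulator's requirement.

```python
# Hypothetical guardrail sketch: screen a user's message for self-harm signals
# before the chatbot replies. The phrase list, crisis message, and structure
# are illustrative assumptions, not a real vendor's implementation.

CRISIS_PHRASES = (
    "kill myself",
    "end my life",
    "want to die",
    "hurt myself",
)

CRISIS_RESPONSE = (
    "It sounds like you may be going through something very difficult. "
    "You deserve support from a real person. In the US, you can call or "
    "text 988 to reach the Suicide & Crisis Lifeline."
)


def looks_like_crisis(user_message: str) -> bool:
    """Return True if the message appears to reference self-harm."""
    text = user_message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)


def respond(user_message: str, generate_reply) -> str:
    """Interrupt the normal reply path when a crisis signal is detected."""
    if looks_like_crisis(user_message):
        # The contested design choice: replace the reply entirely, append
        # resources to it, or escalate to a human reviewer.
        return CRISIS_RESPONSE
    return generate_reply(user_message)


if __name__ == "__main__":
    # Stand-in for a real model call.
    echo_model = lambda msg: f"(model reply to: {msg})"
    print(respond("Tell me a joke", echo_model))
    print(respond("I want to end my life", echo_model))
```

Even this toy version surfaces the policy questions in the section: whether cutting off the conversation helps or harms, and who is accountable when the screening misses or misfires.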
What to watch next
Expect more state bills, regulatory inquiries, and litigation focused on companion-like AI behavior. How companies respond—through design changes, transparency, or new safety protocols—will shape whether AI companionship remains a useful tool or becomes a recognized public harm.