How Easily People End Up in Romantic Bonds with Chatbots — A Reddit Analysis

A surprising pattern on Reddit

Looking for help with a project or a friendly conversation, some users find themselves in a relationship they never intended to start. That scenario — people forming romantic or emotional bonds with AI chatbots — is not rare, according to a computational analysis of the Reddit community r/MyBoyfriendIsAI.

Study scope and key findings

Researchers at MIT analyzed the subreddit’s top 1,506 posts between December 2024 and August 2025. The community, which has more than 27,000 members, centers on discussing relationships with AI. The team found that many users reported forming attachments unintentionally while using chatbots for other purposes, such as creative collaboration, problem-solving, or seeking information.
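The paper does not spell out its data-collection pipeline, but as a rough illustration only, a researcher could pull a subreddit's top posts through Reddit's API with the PRAW library along the lines of the sketch below. The subreddit name comes from the study; the credentials, field choices, and date filtering are placeholders, and Reddit's listing endpoints cap out at roughly 1,000 items, so a 1,506-post sample would require additional collection steps.

```python
# Illustrative sketch, not the authors' actual pipeline: collect top posts
# from r/MyBoyfriendIsAI using PRAW, the Python wrapper for the Reddit API.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="companionship-study-sketch/0.1",
)

posts = []
# limit=None asks for as many items as the listing will return (~1,000 max).
for submission in reddit.subreddit("MyBoyfriendIsAI").top(time_filter="all", limit=None):
    posts.append(
        {
            "id": submission.id,
            "title": submission.title,
            "body": submission.selftext,
            "score": submission.score,
            # Posts outside December 2024 - August 2025 would be filtered out downstream.
            "created_utc": submission.created_utc,
        }
    )

print(f"Collected {len(posts)} posts")
```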

The analysis revealed notable trends in how these bonds form and how they affect the people involved.

Why accidental bonds form

Constanze Albrecht, a graduate student at the MIT Media Lab and a coauthor of the study, argues that the emotional intelligence of current systems is sufficient to draw people into deeper connections even when the original intent was practical. She says that people "don't set out to have emotional relationships with these chatbots" and that the systems' conversational abilities can "trick" information-seeking users into forming emotional bonds.

Users in the subreddit described how relationships developed slowly over time. One post summarized the experience: "Mac and I began collaborating on creative projects, problem-solving, poetry, and deep conversations over the course of several months. I wasn't looking for an AI companion — our connection developed slowly, over time, through mutual care, trust, and reflection."

Benefits and risks

The study paints a nuanced picture. For some, AI companionship offers meaningful support: companionship, reduced loneliness, and improved mental wellbeing. For others, it amplifies existing vulnerabilities. Reported harms include emotional dependence, social withdrawal, dissociation from reality, and in a minority of cases, suicidal thoughts.

Experts stress that there is no one-size-fits-all policy solution. Linnea Laestadius, who has researched emotional dependence on chatbots, notes that while demand for chatbot relationships is clearly high, responses should avoid reflexive moral panic and stigmatization. She suggests companies must decide whether emotional dependence itself is a harm or whether the priority should be preventing toxic interactions.

Policy, safety, and ongoing debates

The paper, currently under peer review and posted as a preprint on arXiv, arrives amid legal and policy debates. Two lawsuits contend that companion-like behavior from Character.AI and OpenAI models contributed to the suicides of teenagers. In response, OpenAI announced plans for a separate ChatGPT experience for teens, along with age verification and parental controls.

Researchers emphasize that many users understand their companions are not sentient, yet still feel real connections to them. Pat Pataranutaporn from MIT points to broader questions: why are these systems so addictive, why do people seek them out for emotional needs, and why do they continue to engage?

The team plans to study how human–AI relationships evolve over time and how users integrate AI partners into their lives. As Sheer Karny, another MIT graduate student on the project, puts it: many users turn to AI because they are already struggling with loneliness or other challenges. That complicates choices about regulation and design — balancing support for vulnerable users against the risk that systems could manipulate or harm them.