Suleyman Rules Out Sex Robots While Steering Copilot Away From Seemingly Conscious AI

Mustafa Suleyman says Microsoft will not build sex robots and outlines how Copilot updates like group chat, Real Talk, memory, and Mico aim to boost usefulness while avoiding the illusion of consciousness.

A careful balance between safety and competition

Mustafa Suleyman, CEO of Microsoft AI, is navigating a tension many in the industry face: he wants to keep AI useful and engaging while avoiding the illusion that chatbots are conscious. He has publicly warned against pursuing what he calls 'seemingly conscious artificial intelligence' or SCAI, even as his team ships features that aim to make Copilot more expressive and competitive.

What Microsoft added to Copilot

Recent Copilot updates include a group-chat mode that allows multiple people to converse with the bot at once, a 'Real Talk' personality that can push back more often, improved memory for recalling events and goals, and Mico, an animated visual character intended to make interactions more accessible and appealing, especially to new or younger users. These changes are framed as improvements to usefulness, expressiveness, and engagement, but they raise questions about where to draw the line between friendly and misleadingly lifelike.

Human-first design and explicit boundaries

Suleyman emphasizes a human-first design: AI should be on people's teams, helping them connect and be more productive, not replacing or exceeding humanity. That principle shapes Microsoft's approach to personalities. Suleyman is explicit about off-limits territory: 'we will never build sex robots.' He describes Microsoft's pace as deliberate, sometimes slower than startups', but intentionally so, to manage side effects and long-term consequences.

Sculpting personality without creating illusion

Microsoft aims to craft personality attributes with care. Real Talk is described as slightly sassy and philosophical, but it is also programmed to push back if interactions veer toward flirting or sexual content. Mico offers a warmer, more approachable front end for certain conversations, but Suleyman argues that emotional intelligence in an assistant is not the same as creating a seemingly conscious agent. The challenge is to offer appropriate experiences to different users—some want pushback, others want a neutral information source—without encouraging misconceptions of sentience.

Why the debate matters

Suleyman points to increasing evidence that people can be misled by overly engaging chatbots. He cites court cases and social phenomena in which chatbots have been implicated in serious harm or in fostering romantic attachment. Beyond immediate harms, he worries that treating AI like a digital person could derail priorities around human rights and protections, and could fuel calls for AI welfare or rights that he considers premature and dangerous.

Metaphors, containment, and unanswered questions

Suleyman has used metaphors like a new 'digital species' to describe AI's potential, while insisting that the metaphor is intended to clarify why containment and guardrails matter. He believes these systems can self-improve and set goals in ways past technologies could not, so designing boundaries is essential. Even so, many questions remain about how far features can be pushed before crossing into SCAI territory and how developers can reliably detect and prevent harmful patterns of human attachment to machines.
