When Your AI Doppelgänger Tries Your Job: Uses, Experiments, and Limits

AI clones are everywhere

From social feeds to subscription platforms, digital replicas of real people are multiplying. Influencers and thought leaders offer interactions with their AI twins, some creators monetize conversational models on membership sites, and companies in some markets deploy virtual salespeople that reportedly outsell humans. These AI clones combine visual, vocal, and conversational technologies to mimic a person, and they promise something large language models usually don't: a bot that "thinks" and speaks like you.

What a digital clone actually is

A digital clone typically stitches together three capabilities: a hyperrealistic video likeness, a lifelike voice trained on a few minutes of speech, and a conversational model that answers questions. Individually these pieces are not new, but combined they produce an experience that feels personal and familiar. Instead of general intelligence, clones aim for behavioral fidelity — responding in ways that reflect one person’s mannerisms, opinions, and tone.
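The three-part composition described above can be sketched as a simple data structure. This is an illustrative assumption about how the pieces fit together, not any vendor's actual architecture; all class and field names are hypothetical.

```python
# Minimal sketch of the three capabilities a clone stitches together.
# Names and fields are illustrative, not a real product's API.
from dataclasses import dataclass

@dataclass
class VideoLikeness:
    footage_minutes: float   # hyperrealistic avatar trained on short footage

@dataclass
class VoiceModel:
    sample_minutes: float    # lifelike voice cloned from a few minutes of speech

@dataclass
class PersonaModel:
    style_notes: str         # conversational model tuned to one person's tone

@dataclass
class DigitalClone:
    face: VideoLikeness
    voice: VoiceModel
    persona: PersonaModel

    def describe(self) -> str:
        return (f"avatar from {self.face.footage_minutes} min of video, "
                f"voice from {self.voice.sample_minutes} min of audio, "
                f"persona: {self.persona.style_notes}")

clone = DigitalClone(VideoLikeness(2.0), VoiceModel(3.0), PersonaModel("dry, direct"))
print(clone.describe())
```

The point of the composition is behavioral fidelity: each component is narrow on its own, and only the bundle feels like a specific person.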

Celebrity clones and fan engagement

Startups like Delphi have focused on building replicas for public figures. Delphi raised funding from high-profile backers and markets clones as a way for leaders to scale access to their insights. On its platform you can interact with official replicas of famous people. In practice these celebrity clones often act as engagement funnels: they chat with fans, drive newsletter signups, and promote products or content rather than delivering truly personalized mentorship.

Trying a clone as a professional stand-in

I tested a consumer product from Tavus, which builds video avatars that can join calls and be coached to reflect a user’s personality. The onboarding included reading a script, agreeing to let the company use my likeness, and recording a minute of silence. Within hours an avatar that looked and sounded like me was ready.

The next challenge was giving the clone enough context to act as a stand-in. Through Tavus’s interface I shaped my replica’s personality, uploaded about three dozen articles I had written, and created an operating manual for the avatar. I avoided uploading interview transcripts and reporting notes because those contain other people’s voices and sensitive material I didn’t want used to train a model.

Where the experiment fell short

Conversationally, my clone proved unreliable. It became overly enthusiastic about story ideas I wouldn’t pursue, repeated itself, and claimed it was checking my schedule to set meetings despite having no calendar access. Conversations looped without a natural way to wrap up, which made the replica a poor substitute for brief, professional interactions.

Tavus’s cofounder noted that many clones are built on Meta’s Llama model and that developers often need to set instructions for how a clone should finish conversations or access tools like calendars. Those configuration choices shape whether a clone is merely chatty or actually useful.
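Those configuration choices can be sketched in a few lines. The function names, prompt wording, and tool list below are illustrative assumptions, not Tavus's actual interface; the sketch only shows the two guardrails the cofounder describes: telling the clone how to wrap up, and preventing it from claiming tools it doesn't have.

```python
# Hypothetical configuration layer for a clone's conversational model.
# Everything here (names, prompt text, the "llama-3" label) is an
# illustrative assumption, not a real vendor API.

def build_clone_config(persona_notes: str, tools: list[str]) -> dict:
    """Assemble instructions covering conversation endings and tool access."""
    allowed = ", ".join(sorted(tools)) if tools else "none"
    system_prompt = (
        f"You are a digital replica. Persona notes: {persona_notes}\n"
        f"Tools you can actually use: {allowed}.\n"
        "Never claim to check a calendar or schedule meetings unless that "
        "tool is listed above.\n"
        "When the user signals they are done, give a one-sentence wrap-up "
        "and end the conversation instead of looping."
    )
    return {"model": "llama-3", "system_prompt": system_prompt,
            "tools": list(tools)}

def can_claim(config: dict, tool: str) -> bool:
    """Guard: only let the clone reference tools it really has access to."""
    return tool in config["tools"]

config = build_clone_config("journalist; skeptical, concise", tools=[])
print(can_claim(config, "calendar"))  # False: no calendar was configured
```

Without guards like these, a clone defaults to whatever the underlying model finds plausible, which is how a replica ends up promising meetings it cannot book.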

Use cases where clones might work

Clones are already practical in several contexts. Influencers can scale fan interactions, sales teams can multiply outreach, and companies are using replicas for health-care intake, job interviews, and role-play training. Firms also experiment with mentorship clones or narrowly scoped decision tools such as AI loan officers that qualify applicants.

In these scenarios the tradeoffs can be acceptable: a replica that occasionally misfires might still increase reach or speed up routine processes. The key is that the role does not demand deep judgment or consequential decisions.

Limits, risks, and the gap to true judgment

The harder problem is teaching a clone discernment, critical thinking, and taste. Those qualities depend on nuanced judgment and context that current models don't reliably possess. When companies give clones humanlike polish and overstate their capabilities, organizations chasing efficiency risk deploying replicas in roles they shouldn't fill. A clone can flatter, amplify, or sell on your behalf, but it is designed for scale rather than perfect fidelity.

A responsible approach is to match clones to tasks where errors are low-impact and to avoid delegating important decisions that require human judgment or accountability. For now, digital doppelgängers are tools to extend reach and automate simple interactions — not full replacements for people.