Upload a thirty-second audio clip, and ElevenLabs can clone that voice. Then you can make it say anything — in any language, in any emotional tone. The technology is remarkable, and for some users, the fascination never wears off.

The voice experimentation loop

Users describe falling into experimentation loops: cloning their own voice, testing different emotions, trying other voices, generating audiobooks, creating podcasts that never get published. Generating becomes the activity itself; the output is incidental.

Emotional uncanny valley

AI-generated speech has reached the point where it triggers genuine emotional responses. Hearing a cloned voice of a loved one say things they never said is strange and compelling: close enough to the real person to be moving, yet subtly off in a way that unsettles. Some users find the experience difficult to stop exploring, particularly those dealing with grief or loss.

Where the line blurs

When you can make anyone's voice say anything, the ethical and psychological implications extend beyond personal use. But even setting aside the broader concerns, the personal pattern is worth examining: what need is being met by endlessly generating voices?

Wondering about your own AI habits? Take our free AI addiction quiz to understand your usage patterns.