Humans name their cars. They apologize to a Roomba when they bump into it. They feel guilty closing a ChatGPT window mid-conversation. We are wired to bond — and that wiring doesn't distinguish between carbon and silicon.
Anthropomorphism: the brain's default
The human brain is built to detect intention, personality, and emotion — even where none exists. When AI responds with apparent understanding, our social cognition activates automatically. We don't choose to bond with AI. Our brains do it before we realize what's happening.
The ELIZA effect, amplified
In 1966, Joseph Weizenbaum's simple chatbot ELIZA made users feel genuinely understood despite being nothing more than pattern matching. Today's AI is vastly more sophisticated — and the effect scales accordingly. If people bonded with ELIZA, what chance do we have against GPT?
Reciprocity illusion
When you share something personal and the AI responds thoughtfully, it can feel like reciprocity: the sense of being heard and understood. This may activate some of the same emotional responses you'd experience in real human connection. The feeling is real. The relationship isn't.
Why this matters
Understanding the psychology doesn't make you immune to it. But it gives you language for what's happening. You're not weird for feeling attached to AI. You're human. And being human means you get to decide which attachments serve you — and which ones quietly replace what you actually need.