The headache was probably nothing. But you asked the AI anyway. The AI listed possibilities, calmly and thoroughly, and now you can't stop thinking about the third one on the list. So you ask a follow-up question. And another. An hour later, you've convinced yourself of something terrible, all because a chatbot was thorough.

The new cyberchondria

Googling symptoms was already a known hazard. AI chatbots compound it because they're conversational: you describe your symptoms in detail, ask clarifying questions, and get responses that grow more specific, and more alarming, with each turn. The interaction feels authoritative even when it shouldn't be.

The reassurance loop

People with health anxiety often seek reassurance, and AI supplies it without limit (or supplies unlimited alarm instead). The problem is that reassurance from a chatbot doesn't stick. Each query brings momentary relief while teaching you that checking is how anxiety gets managed, so the anxiety returns, you ask again, and the cycle repeats. Each round reinforces the pattern rather than breaking it.

What AI cannot do

AI cannot examine you, run tests, or weigh your medical history in context. It generates possibilities from text patterns, not from evidence about you. The gap between "possible" and "likely" is enormous: almost any headache could, in principle, be something serious, but the overwhelming majority are benign tension or migraine headaches. AI rarely communicates that gap well. Real health concerns deserve real professional attention.