Your patient arrives with a detailed printout from ChatGPT about what they've been experiencing. They've already formed their own conclusions, researched their options, and prepared specific requests. Or worse: they've been managing a health concern entirely through AI and are only seeing you because the AI finally suggested they should. This is a new reality in healthcare.
The AI-informed patient
Patients who consult AI before appointments arrive with a mix of knowledge and misinformation. They may have accurate general information but lack the clinical context to interpret it properly. They may have developed health anxiety from AI's exhaustive lists of possible conditions. Or they may have been reassured by AI when they should have sought care earlier.
AI as a barrier to care
Some patients use AI as a substitute for medical consultation, delaying necessary care because "ChatGPT said it was probably nothing." Others cycle between AI reassurance and renewed anxiety, a pattern that resembles cyberchondria: health anxiety amplified by easy access to medical information. Both patterns are forms of AI dependency that directly affect health outcomes.
Conversational approaches
Some practitioners find it more helpful to engage with a patient's AI research than to dismiss it: acknowledging the effort while adding professional context can be a constructive starting point. For patients showing signs of AI-driven health anxiety, a gentle conversation about AI's limitations in medical contexts may help recalibrate their relationship with these tools.
Learn more about AI dependency patterns, or explore our self-reflection tools, which can support these awareness conversations.