You Google something. Before you see any links, an AI-generated summary appears at the top. It answers your question directly. You nod, close the tab, and never click a single link. This is happening billions of times a day.

The end of curiosity?

AI Overviews provide a synthesized answer that feels complete. But it's a summary of summaries — a compression of knowledge that strips away nuance, context, and the messy details where real understanding lives. When the summary feels sufficient, the impulse to dig deeper dies.

Trusting without verifying

Most users accept AI Overviews as accurate without checking the underlying sources. This marks a fundamental shift in our relationship with information: from active verification to passive consumption. When Google's AI says something, it feels like Google itself is saying it — and Google has decades of accumulated trust behind it.

The bigger dependency

The concern isn't just that individual answers may be wrong. It's that a generation of users may lose the habit of thinking critically about information sources. When the AI always hands you an answer, you stop developing the skill of finding your own.

Wondering about your own AI habits? Take our free AI addiction quiz to understand your usage patterns.