AI companies face a fundamental tension: safety features often reduce engagement, while engagement features can undermine safety. This tension creates challenging design decisions with real consequences for user wellbeing.
Where safety and engagement conflict
Usage reminders reduce engagement. Conversation time limits reduce engagement. Content filters that prevent emotionally intense conversations reduce engagement. AI companies know these features matter for safety, but they also know each one depresses the metrics that drive business success: session length, daily active users, and retention.
The safety theater risk
Some companies implement safety features that appear protective but have minimal actual impact on usage patterns. A usage reminder that is easily dismissed, a daily limit that can be overridden with one click, or age verification that requires no proof — these are safety measures designed to demonstrate concern without actually reducing engagement.
Genuine safety design
Meaningful safety design might include: hard time limits, cooling-off periods between sessions, mandatory breaks, design that encourages session completion rather than continuation, and wellbeing checks that affect AI behavior when concerning patterns are detected.
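The hard-limit and cooling-off ideas above could be combined into a small session gate. This is a minimal illustrative sketch, not any company's actual implementation; the SessionPolicy and SafetyGate classes, field names, and limit values are all hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class SessionPolicy:
    """Hypothetical policy: a hard daily cap plus a mandatory cooling-off gap."""
    daily_limit_min: int = 120   # hard cap on total minutes of use per day
    cooloff_min: int = 30        # required gap between consecutive sessions

@dataclass
class SafetyGate:
    policy: SessionPolicy = field(default_factory=SessionPolicy)
    minutes_used_today: int = 0
    minutes_since_last_session: Optional[int] = None  # None = first session today

    def may_start_session(self) -> Tuple[bool, str]:
        """Return (allowed, reason). A hard limit has no dismiss or override path."""
        if self.minutes_used_today >= self.policy.daily_limit_min:
            return False, "daily limit reached; resets tomorrow"
        if (self.minutes_since_last_session is not None
                and self.minutes_since_last_session < self.policy.cooloff_min):
            return False, "cooling-off period still in effect"
        return True, "ok"
```

The design point is that the gate returns a refusal rather than a dismissible prompt: `SafetyGate(minutes_used_today=125).may_start_session()` denies the session outright, which is what separates a genuine limit from safety theater.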
The competitive pressure
Even companies that want to prioritize safety face competitive pressure. If one AI service limits engagement while competitors do not, users may switch to less restrictive alternatives. This race-to-the-bottom dynamic makes industry-wide standards important.
What users can do
Users can support companies that implement genuine safety features, advocate for industry standards, and adopt their own safeguards when product design does not provide adequate protection.