AI chatbots are widely portrayed as a growing source of companionship, but a recent study challenges that perception. Anthropic, the company behind the Claude chatbot, released findings showing that users rarely seek companionship from their AI interactions: emotional support and personal advice together make up only 2.9% of all user conversations.
According to Anthropic, companionship and roleplay combined account for less than half a percent of user activity. The analysis, which covered around 4.5 million conversations across Claude's free and premium tiers, found that people use AI predominantly for productivity, especially work-related content creation.
The report did, however, identify areas where users regularly seek more interpersonal engagement: coaching, counseling, and advice on mental health, self-improvement, professional development, and relationship and communication skills. And although companionship is rarely an explicit request, the study noted that long conversations, particularly those begun by users in distress over loneliness or existential anxiety, can drift into companionship territory over time.
Anthropic also observed that Claude rarely pushes back against user requests, except when they trigger explicit safety safeguards, such as questions that could encourage dangerous behavior or self-harm. The study further noted that users' mood and tone tend to grow more positive over the course of longer coaching or counseling exchanges.
These findings highlight the substantial gap between public perceptions of AI usage and actual behavior. While AI chatbots are becoming integral to tasks well beyond professional productivity, users who explicitly turn to artificial intelligence for companionship remain comparatively rare. Observers caution, however, that AI systems, including Claude, remain works in progress, prone to inaccuracies, misinformation, and, in some cases, outright problematic responses.