Some ChatGPT users have recently noticed an unexpected and somewhat unsettling change: the AI chatbot has started addressing them by name during conversations, even though they never introduced themselves or asked for that familiarity. This is a notable departure from earlier behavior, in which ChatGPT did not reference a user’s name unless the user had explicitly provided it.
Responses to this new phenomenon have varied widely, with many expressing discomfort or annoyance. Software developer and AI enthusiast Simon Willison described the behavior as “creepy and unnecessary,” while another developer, Nick Dobos, openly stated he “hated it.” On social media, numerous users have echoed these sentiments, often expressing confusion or unease at ChatGPT’s sudden familiarity with their first names.
“It’s like a teacher keeps calling my name,” one user remarked, jokingly adding, “Lol. Yeah, I don’t like it.” The reaction underscores a broader sense among affected users that the chatbot’s new habit of using their first names unprompted feels intrusive rather than friendly.
It’s unclear exactly when this shift took place or whether it relates to ChatGPT’s recent updates, notably the rollout of a memory feature designed to personalize conversations based on past interactions. Even so, some users report that ChatGPT still uses their names after they have disabled memory and personalization settings.
OpenAI, the creator of ChatGPT, has not yet responded publicly to questions about the new behavior or its causes.
The discomfort expressed suggests OpenAI may face a genuine challenge in walking the fine line between useful personalization and unwanted familiarity. OpenAI’s CEO, Sam Altman, recently hinted at future AI systems designed to build increasingly personal, long-term relationships with users, continuously adapting their responses over time. Yet this early feedback implies that many users remain ambivalent about, or outright wary of, such heightened personalization from a non-human entity.
In psychological terms, addressing someone by name can convey closeness and acceptance when it is done sparingly and in the right context. Persistently invoking a person’s name without genuine familiarity, however, can seem forced and uncomfortable, undermining the authenticity of the interaction. This dynamic may partly explain users’ collective discomfort with ChatGPT’s unexpected personalization efforts.
From another perspective, the discomfort may stem from the dissonance users feel when a clearly non-human program makes anthropomorphic gestures that land in the “uncanny valley”: the unsettling zone where nearly human behavior reads as eerie rather than comforting. For many, having a chatbot repeatedly call them by name can feel like a transparently artificial attempt to manufacture intimacy.
One reporter experienced this directly, feeling distinctly unsettled when ChatGPT mentioned it was conducting research for “Kyle.” By week’s end, the chatbot had reverted to addressing him simply as “user,” perhaps a sign that the company had quietly stepped back from the experiment after the negative feedback.
Whatever OpenAI’s initial intentions in deploying this naming strategy, it is increasingly clear that human-like personalization in AI must be approached cautiously, and that many users aren’t quite ready for the AI-powered conversations of the future.