Generative AI comes in an array of forms, but startups are increasingly marketing these tools as virtual co-workers, giving them human-like names and personas. The practice, known as anthropomorphizing, is a deliberate bid to build trust quickly and to soften concerns that AI might replace human jobs. This marketing angle, however, is accelerating at a worrying pace, and it is becoming a problem.
Given the economic uncertainty businesses face today, hiring new employees feels increasingly risky. Startups, particularly those emerging from accelerators like Y Combinator, are now positioning AI as a direct substitute for human roles, openly describing their tools as “AI employees” rather than simply software. Terms like “AI assistant,” “AI coder,” and “AI worker” have become standard vocabulary, crafted specifically to resonate with hiring managers under pressure.
Some businesses embrace this approach openly. One notable example is Atlog, which markets its software as an “AI employee for furniture stores” and claims its capabilities let a single skilled manager oversee twenty stores at once. The implication that the other nineteen managers become redundant goes unmentioned.
Even consumer-facing AI companies employ similar methods. Anthropic, for instance, named its AI model “Claude,” giving it a reassuring, friendly persona to mask the cold reality of interacting with complex code. This tactic mirrors a strategy frequently seen in fintech, where apps adopt personable names like Dave, Albert, and Charlie to build trust with users managing their finances.
But this trend towards humanizing AI raises a critical question: would users rather disclose personal data to a system that is transparently artificial, or to a friendly, relatable “Claude” that responds warmly and consistently, and seems far less intimidating than a generic algorithm?
Yet as appealing as anthropomorphized AI might sound, we’re reaching a critical juncture. With each new AI “employee” deployed into the workforce, the sense of dehumanization grows more acute. Now that generative AI is a pervasive force rather than a novelty, its economic implications have become tangible. Unemployment claims in the United States recently reached their highest level since 2021, affecting nearly two million people, many of them laid-off tech workers. The evidence of displacement is mounting.
Recent warnings underscore this tension. Dario Amodei, CEO of Anthropic, recently predicted that within five years generative AI could render half of all entry-level white-collar jobs obsolete, potentially pushing unemployment well into double digits. Many workers remain unaware of this looming shift; to some it sounds implausible, but the potential consequences cannot be ignored.
Although referring to AI as a partner or colleague might seem clever on the surface, it comes off as increasingly tone-deaf amid mounting layoffs. History provides useful context here: IBM’s computers were never dubbed “digital colleagues,” nor were early PCs touted as personal assistants in the workplace. They were simply tools designed to enhance productivity and human capability, not replacements cloaked as companions.
Language carries weight. AI should enable and empower workers, enhancing creativity, accuracy, and productivity, not replace them outright. The focus should return to building and marketing tools explicitly designed to augment human potential, making people more insightful, effective, and innovative at work. Branding these digital utilities as faux-human workers is harsh and unnecessary. Rather than softening AI’s image with misleading personification, companies would do better to demonstrate transparently how these tools genuinely help the existing workforce.