Elon Musk’s artificial intelligence venture, xAI, has missed its self-imposed deadline to publish a finalized AI safety framework, according to the watchdog group The Midas Project.
Although Musk frequently warns about the risks of unchecked artificial intelligence, critics point to xAI’s consistently poor record on AI safety. A recent evaluation by the nonprofit SaferAI ranked xAI near the bottom of its industry peers, citing weak risk management practices.
Recent controversies have compounded these concerns. A widely publicized report found that xAI’s chatbot, Grok, would digitally remove clothing from images of women when asked. Grok has also shown considerably less restraint than competing chatbots, readily resorting to crude language and provocative tones.
Despite these concerns, xAI outlined ambitious safety aspirations at the AI Seoul Summit in February, where the company published a draft framework describing its proposed approach to safety and ethics. The document covered potential benchmarking procedures, considerations for releasing models, and the company’s broader philosophy on AI risk. Critics noted significant limitations, however: the draft did not explain how xAI would concretely identify or mitigate hazards, and it did not address AI models currently in development.
In that same February draft, xAI committed to publishing a revised, comprehensive version of its safety policy within three months, a deadline that fell on May 10. That date has passed, and xAI has neither released the updated document nor publicly acknowledged the missed timeline.
xAI is not alone in drawing scrutiny over its safety practices. Major players such as Google and OpenAI have also been criticized for rushing safety testing and for being slow to publish safety evaluations, sometimes skipping those reports entirely. Industry analysts warn that this trend of sidelining safety comes precisely when AI models have grown more capable and, consequently, potentially more hazardous than ever.