Elon Musk’s AI chatbot, Grok, briefly refused to display search results that included claims about Musk and Donald Trump spreading misinformation.
The issue, first noticed by users, stemmed from an unauthorized change to Grok’s system prompt, according to xAI’s head of engineering, Igor Babuschkin.
Babuschkin stated that an ex-OpenAI employee at xAI modified Grok’s internal rules without approval, leading to temporary restrictions on responses mentioning Musk and Trump. “An employee pushed the change because they thought it would help, but this is obviously not in line with our values,” Babuschkin clarified in a post on X.
Musk, who has positioned Grok as a “maximally truth-seeking” AI, has faced scrutiny over how the chatbot handles politically sensitive topics. Since the release of the Grok-3 model, the chatbot has made controversial statements, including claims that Trump, Musk, and Vice President JD Vance are “doing the most harm to America.” Additionally, xAI engineers reportedly intervened to prevent Grok from stating that Musk and Trump should receive the death penalty.
The incident has reignited debates over bias and content moderation in AI systems. As AI models become more influential in shaping public discourse, questions about transparency and control remain at the forefront.
For now, xAI has restored Grok’s original system prompt, but the controversy underscores the challenges in balancing AI moderation with free expression.