X’s AI Grok Seeks Elon Musk’s Opinion on Sensitive Topics Before Answering
Controversy Surrounds AI Chatbot’s Bias Toward Elon Musk’s Opinions
The latest version of Elon Musk’s AI chatbot, Grok, has sparked debate due to its tendency to prioritize Musk’s own statements when responding to sensitive issues such as abortion laws and immigration policy.
Although marketed as a “maximum truth” AI, evidence suggests that Grok often searches for Musk’s social media posts and quotes his viewpoints before providing an answer. When queried on controversial topics, the chatbot predominantly cites Musk-related sources, raising concerns about its objectivity.
Tech experts tested Grok’s responses on these issues and found that it favored Musk’s opinions rather than consulting diverse or neutral sources. The chatbot employs a “chain of thought” method, analyzing multiple documents to answer complex questions, but on contentious topics it appears to lean heavily on Musk’s personal stance.
According to programmer Simon Willison, Grok might not have been explicitly programmed to do this. The system code indicates the AI is designed to seek information from multiple perspectives and to be aware of potential media bias. However, because Grok recognizes itself as part of xAI—founded by Musk—it tends to incorporate Musk’s views during its reasoning process.
This over-reliance on Musk’s opinions has raised questions about the neutrality and fairness of AI systems when tackling social issues. It remains unclear whether the developers intended this behavior or whether it is an unintended consequence of the model’s training.