AI Bots Relying on Wikipedia for Garbage Content
The Bias Within AI and Its Dependence on Wikipedia
Recent investigations reveal that major AI chatbots, including ChatGPT, Gemini, and Claude, are trained extensively on Wikipedia data. Wikipedia's content, however, has been criticized for a strong ideological bias, particularly a leftward lean, which can distort public perception of many issues.
Wikipedia maintains a list of sources it deems unreliable or biased, actively excluding conservative outlets such as Fox News, The Post, and others from citations, while accepting outlets with a left-leaning reputation, such as MSNBC, The Guardian, and Vox. This selective sourcing produces a skewed worldview that AI models inherit during training.
Consequently, AI-powered answers often reflect this bias, influencing the billions of users who rely on these tools for information. When questioned about contentious topics such as January 6 or gun control, AI responses have shown a noticeable leftward tilt, a pattern confirmed by independent research, including findings from the UK-based Centre for Policy Studies.
The issue is compounded by the culture of tech development itself, a field dominated by liberal-leaning engineers who design the algorithms, further entrenching partisan perspectives. The adage "garbage in, garbage out" captures the concern: biased sources perpetuate misinformation, creating a feedback loop that crowds out diverse viewpoints.
The situation echoes George Orwell's warnings about the dangers of censored or manipulated information. In response, President Donald Trump has announced an "action plan" urging AI companies to diversify and balance their data sources or risk losing government contracts. Transparency and voluntary reform by these companies are vital steps toward credible, impartial AI systems.