
In early 2024, Google’s AI tool, Gemini, caused controversy by generating pictures of racially diverse Nazis and other historical discrepancies. For many, the moment was a signal that AI was not going to be the ideologically neutral tool they’d hoped.
Gemini’s safety team made Nazi Germany more inclusive. (X)
Introduced to fix the very real problem of biased AI generating too many pictures of attractive white people — which are over-represented in training data — the over-correction highlighted how Google’s “trust and safety” team is pulling strings behind the scenes.
And while the guardrails have become a little less obvious since, Gemini and its major competitors ChatGPT and Claude still censor, filter and curate information along ideological lines.
Political bias in AI: What research reveals about large language models
A peer-reviewed study of 24 top large language models published in PLOS One in July 2024 found almost all of them are biased toward the left on most political orientation tests.
Interestingly, the base models were found to be politically neutral, and the bias only becomes apparent after the models have been through supervised fine-tuning.
This finding was backed up by a UK study in October of 28,000 AI responses that found “more than 80% of policy recommendations generated by LLMs for the EU and UK were coded as left of centre.”
AI models are big supporters of left-wing policies in the EU. (davidrozado.substack.com)
Response bias has the potential to affect voting tendencies. A pre-print study published in October (but conducted while Biden was still the nominee) by researchers from Berkley and the University of Chicago found that after registered voters interacted with Claude, Llama or ChatGPT about various political policies, there was a 3.9% shift in voting preferences toward Democrat nominees — even though the models had not been asked to persuade users……