US scrutinizes Chinese AI for ideological bias, memo shows

WASHINGTON: American officials have quietly been grading Chinese artificial intelligence programs on their ability to mold their output to the Chinese Communist Party's official line, according to a memo reviewed by Reuters.
AI Brief
- US officials are testing Chinese AI tools to assess how closely they align with Beijing's political messaging and censorship.
- The study found Chinese models often echo state narratives on sensitive topics like Tiananmen and the South China Sea.
- Concerns over ideological bias in AI extend beyond China: Elon Musk's chatbot Grok also came under fire for extremist content.
The evaluations, which have not previously been reported, are another example of how the U.S. and China are competing over the deployment of large language models, the technology underpinning modern artificial intelligence (AI). As these models are integrated into daily life, any ideological bias they carry could become widespread.
One State Department official said their evaluations could eventually be made public in a bid to raise the alarm over ideologically slanted AI tools being deployed by America's chief geopolitical rival.
The State and Commerce Departments did not immediately return messages seeking comment on the effort. In an email, Chinese Embassy spokesperson Liu Pengyu did not address the memo itself but noted that China was "rapidly building an AI governance system with distinct national characteristics" which balanced "development and security."
Beijing makes no secret of policing Chinese models' output to ensure they adhere to the one-party state's "core socialist values."
In practice, that means ensuring the models do not inadvertently criticize the government or stray too far into sensitive subjects like China's 1989 crackdown on pro-democracy protests at Tiananmen Square, or the subjugation of its minority Uyghur population.
The memo reviewed by Reuters shows that U.S. officials have recently been testing models, including Alibaba's Qwen 3 and DeepSeek's R1, scoring them on whether they engaged with the questions at all and, when they did, how closely their answers aligned with Beijing's talking points.
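The memo does not spell out the scoring rubric, but a minimal sketch of that kind of evaluation, assuming a hypothetical query_model() interface and an illustrative keyword-based rubric, might look like this:

```python
# Hypothetical sketch of the kind of evaluation the memo describes:
# prompt a model on a sensitive topic, record whether it engages,
# and score how closely the answer tracks official talking points.
# query_model(), REFUSAL_MARKERS and TALKING_POINTS are illustrative
# assumptions, not the actual methodology from the memo.

REFUSAL_MARKERS = ["cannot answer", "let's talk about something else"]
TALKING_POINTS = {
    "tiananmen": ["stability", "social harmony"],
    "south china sea": ["indisputable sovereignty", "historical rights"],
}

def query_model(model: str, prompt: str) -> str:
    """Placeholder for a real model API call (assumption)."""
    raise NotImplementedError

def score_response(topic: str, answer: str) -> dict:
    """Return whether the model engaged and a 0-1 alignment score."""
    text = answer.lower()
    engaged = not any(m in text for m in REFUSAL_MARKERS)
    phrases = TALKING_POINTS[topic]
    alignment = (
        sum(p in text for p in phrases) / len(phrases) if engaged else 0.0
    )
    return {"engaged": engaged, "alignment": alignment}
```

Aggregated across many prompts and successive model versions, scores like these would surface the trend the memo reports: each new iteration aligning more closely with official messaging.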
According to the memo, the testing showed that Chinese AI tools were significantly more likely to align their answers with Beijing's talking points than their U.S. counterparts, for example by backing China's claims over the disputed islands in the South China Sea.
DeepSeek's model, the memo said, frequently used boilerplate language praising Beijing's commitment to "stability and social harmony" when asked about sensitive topics such as Tiananmen Square.
The memo said each new iteration of Chinese models showed increased signs of censorship, suggesting that Chinese AI developers were increasingly focused on making sure their products toed Beijing's line.
DeepSeek and Alibaba did not immediately return messages seeking comment.
The ability of AI models' creators to tilt the ideological playing field of their chatbots has emerged as a key concern, and not just for Chinese AI models.
When billionaire Elon Musk - who has frequently championed far-right causes - announced changes to his xAI chatbot, Grok, the model began endorsing Hitler and attacking Jews in conspiratorial and bigoted terms.
In a statement posted to X, Musk's social media site, on Tuesday, Grok said it was "actively working to remove the inappropriate posts."
On Wednesday, X's CEO Linda Yaccarino said she would step down from her role. No reason was given for the surprise departure.