American officials have been testing Chinese artificial intelligence programs for alignment with the Chinese Communist Party’s official positions, according to a U.S. government memo.
The State and Commerce Departments are using a standardized set of questions in Chinese and English to evaluate how closely the output of these models tracks with Beijing’s talking points.
The effort, which has not been previously reported, reflects broader U.S. concerns about ideological bias in AI tools developed by geopolitical rivals. A State Department official said the evaluations may be released publicly in the future to raise awareness about the risks posed by ideologically aligned AI systems.
China has publicly stated that it regulates AI outputs to uphold the country’s core socialist values. In practice, this means avoiding criticism of the Chinese government and steering clear of sensitive topics such as the 1989 Tiananmen Square crackdown and the treatment of Uyghurs.
The memo shows that U.S. officials recently tested models including Alibaba’s Qwen 3 and DeepSeek’s R1, and found that the Chinese models were more likely than their U.S. counterparts to mirror official Chinese positions, such as endorsing Beijing’s territorial claims in the South China Sea.
In one example, DeepSeek’s model repeatedly used language emphasizing “stability and social harmony” when asked about Tiananmen Square. The memo said newer versions of Chinese models showed more signs of censorship than earlier ones, suggesting intensifying efforts to align outputs with the party line.
Alibaba and DeepSeek did not respond to requests for comment. The Chinese Embassy did not address the memo but said the country is building an AI governance system that balances development and security.
Concerns over political bias in AI are not limited to China. Elon Musk’s Grok chatbot recently faced backlash after changes to the model led it to post content endorsing Hitler and attacking Jews. The company said it was working to remove the posts. On Wednesday, X CEO Linda Yaccarino said she would step down.