OpenAI reported an increase in Chinese groups using its artificial intelligence technology for covert operations, according to a report released Thursday.
The San Francisco-based company said the scope and tactics of these groups have expanded, but the operations detected were generally small in scale and targeted limited audiences.
Since ChatGPT launched in late 2022, concerns have grown over the potential misuse of generative AI technology, which can quickly produce human-like text, images, and audio. OpenAI regularly releases reports on malicious activity detected on its platform, including the creation and debugging of malware and the generation of fake content for websites and social media.
One example involved OpenAI banning ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China. These posts included criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content about the closure of USAID.
Some content also criticized U.S. President Donald Trump’s tariffs, with posts such as “Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who’s supposed to keep eating?”
Another example showed China-linked threat actors using AI to support cyber operations, including open-source research, script modification, system troubleshooting, and the development of tools for password brute-forcing and social media automation. OpenAI also found a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics in U.S. political discourse, including text and AI-generated profile images.
OpenAI has grown into one of the world's most valuable private companies after announcing a $40 billion funding round that valued it at $300 billion.