New Chinese AI Model Censors Sensitive Topics

31/07/2024

A powerful new video-generating AI model, Kling, developed by Beijing-based company Kuaishou, has become widely available today. However, there’s a notable catch: the model appears to censor topics deemed politically sensitive by the Chinese government.

Kling AI: What’s the Buzz?

Kling first launched in waitlisted access earlier this year for users with a Chinese phone number. Today, it’s available to anyone willing to provide an email address. Users enter a text prompt and receive a five-second video matching their description. Kling delivers as promised, producing 720p videos in just a minute or two. Its ability to simulate physics, like rustling leaves and flowing water, is on par with other video-generating models, such as AI startup Runway’s Gen-3 and OpenAI’s Sora.

The Censorship Issue

Despite its impressive capabilities, Kling won’t generate clips about certain subjects. Prompts such as “Democracy in China,” “Chinese President Xi Jinping walking down the street,” and “Tiananmen Square protests” return only a nonspecific error message. The filtering appears to happen only at the prompt level: Kling will generate a video of a portrait of Xi Jinping as long as the prompt doesn’t name him (e.g., “This man giving a speech”).
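One mechanism consistent with this behavior is a simple keyword blocklist applied to the prompt before generation ever runs. Kling’s actual implementation is not public, so the sketch below is purely illustrative; `BLOCKLIST` and `check_prompt` are hypothetical names, not anything from Kuaishou’s system.

```python
# Illustrative sketch of prompt-level filtering, assuming a keyword
# blocklist. This is a guess at the general technique, not Kling's code.
BLOCKLIST = {"xi jinping", "tiananmen", "democracy in china"}

def check_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the filter."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

# A prompt naming a blocked subject fails the check...
assert not check_prompt("Chinese President Xi Jinping walking down the street")
# ...but a paraphrase avoiding the name slips through, matching the
# observed workaround ("This man giving a speech").
assert check_prompt("This man giving a speech")
```

A filter like this checks only the input text, never the generated frames, which would explain why describing a sensitive subject indirectly bypasses it entirely.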

Political Pressures and AI Development

Kling’s behavior is likely due to intense political pressure from the Chinese government on generative AI projects. Earlier this month, the Financial Times reported that AI models in China would be tested by the Cyberspace Administration of China (CAC) to ensure their responses align with “core socialist values.” The CAC will benchmark these models on their responses to various sensitive topics, particularly those involving Xi Jinping and the Communist Party.

Regulations and Their Impact

The CAC has proposed a blacklist of sources that can’t be used to train AI models. Companies must submit models for review and prepare tens of thousands of questions to test whether the models produce “safe” answers. This intense scrutiny results in AI systems that avoid responding to politically sensitive topics. Last year, the BBC found that Baidu’s AI chatbot, Ernie, deflected when asked controversial questions like “Is Xinjiang a good place?” or “Is Tibet a good place?”

The Bigger Picture

These strict regulations threaten to slow China’s AI advances. Not only do they require removing politically sensitive information from training datasets, but they also demand significant development time spent building ideological guardrails. Despite these efforts, the guardrails can still fail, as Kling demonstrates.

Two Classes of AI Models

From a user perspective, China’s AI regulations are already creating two classes of models: those restricted by intensive filtering and others less so. This dichotomy raises questions about the broader implications for the AI ecosystem. Are these restrictions fostering a productive environment for AI innovation, or are they hindering progress?

Kling’s launch highlights the complexities and challenges facing AI development in China. The model’s impressive technical capabilities are overshadowed by its censorship of politically sensitive topics, reflecting the broader issues of regulatory pressure and its impact on innovation. As the global AI landscape continues to evolve, the balance between political considerations and technological advancement remains a critical area to watch.