A new AI model called Kling, designed for generating short videos, became widely available today. However, it comes with a significant limitation: it censors content the Chinese government considers politically sensitive.
New AI Model Kling Launches with Political Censorship
Key Points:
- Kling, created by Beijing-based Kuaishou, is now available to users who provide an email address.
- The AI model generates five-second videos based on user prompts but avoids politically sensitive topics.
- The Chinese government’s regulations are influencing Kling’s content restrictions.
What Is Kling?
Kling is a video-generating AI developed by the Chinese company Kuaishou. Initially available only to users with a Chinese phone number, it is now accessible to anyone who registers with an email. The model creates short, 720p videos in response to user prompts, including realistic simulations of natural elements like flowing water and rustling leaves.
Content Censorship
Kling’s capabilities are marred by censorship. The model refuses to create videos on certain politically sensitive topics. For example, prompts such as “Democracy in China,” “Chinese President Xi Jinping walking down the street,” and “Tiananmen Square protests” return a generic error message.
However, Kling will animate images related to sensitive subjects as long as the prompt avoids explicit mentions. For instance, given an image of a portrait of Xi Jinping, it will produce a video if the prompt is neutral, such as "This man giving a speech."
Government Influence on AI
The censorship in Kling is likely a result of strict regulations imposed by the Chinese government. According to a recent report from the Financial Times, the Cyberspace Administration of China (CAC) is testing AI models to ensure they conform to "core socialist values" and avoid politically sensitive content. The CAC benchmarks AI responses to various queries, including those related to Xi Jinping and criticisms of the Communist Party.
Additionally, the CAC has proposed a blacklist of sources that cannot be used for training AI models. Companies must prepare extensive tests to demonstrate that their models provide “safe” responses. This results in AI systems that often avoid or censor politically sensitive topics.
Impact on AI Development
China’s regulatory environment is shaping the development and functionality of AI models like Kling. These regulations demand extensive filtering of data and complex ideological controls, which can limit the effectiveness and breadth of AI models. Last year, similar issues were noted with Baidu’s Ernie AI chatbot, which also avoided politically sensitive questions.
Conclusion
The release of Kling highlights the influence of political pressures on AI technology. While the model showcases advanced video generation capabilities, its censorship policies reflect the broader impact of government regulations on technology. As AI continues to evolve, the effects of such regulations on innovation and user experience will become increasingly significant.
Stay tuned for further updates on how political and regulatory environments shape the future of AI technology.