China To Set Rules Restricting AI Development

Unlike many Western governments, the Chinese government does not want to give free rein to the development of generative artificial intelligence systems. In the future, a state license will reportedly be required, a mechanism through which China is effectively seeking to censor AI systems as well.

According to a Financial Times report citing sources close to the Chinese regulator, the Cyberspace Administration of China (CAC) wants to create a system under which vendors of generative AI models must obtain a license before releasing their products. Previously, the CAC had planned a somewhat more relaxed approach to AI models.

Authorities now want to give AI providers less leeway

In April, a first draft of the Chinese government’s guidelines for AI models said that providers must register their product within 10 days of its release. Now the agency, which is also responsible for China’s extensive internet censorship, is reversing course.

Apparently, the Chinese government’s concern that regulating the new AI-powered offerings from various Chinese companies too leniently could enable the distribution of “undesirable content” now predominates. As early as April, the CAC made clear in its first draft regulation for AI systems that one principle applies above all to artificial intelligence: the party is always right.

The draft said that AI content should “embody core socialist values” and must not “undermine state power, promote the overthrow of the socialist system, incite division of the country or undermine national unity,” according to the Financial Times report. In addition, developers of AI models are expected to ensure that the data used to train their language models is always checked for “correctness, accuracy, objectivity, and diversity”.

Preventing AI hallucinations

The Chinese government is also concerned with avoiding the “hallucinations” that occur with generative AI services, which have already caused a stir with Google Bard, ChatGPT, and Microsoft’s Bing chatbot. In such cases, the AI delivers wrong, insulting, or even racist answers that can do more harm than good.

For this reason, too, the Chinese authorities have made clear that providers of generative AI services must bear almost all responsibility for the content their systems generate. Should one of these increasingly popular services cause “content” problems in China, large Chinese internet companies could face drastic consequences in the form of state repression.
