Chinese companies and organizations developing AI chatbots are now being told that their output must toe the Communist Party line. The rules have been issued by the Cyberspace Administration of China.
They require that the AI software brains behind chatbots reflect “socialist core values,” avoid information that undermines state power or national unity, and be registered with regulators.
What the Rules Mean
As Chinese tech giants rush to launch homegrown AI chatbots, the government says they must toe the party line. Draft rules issued for public feedback this week will forbid generative AI tools from spitting out content that “contains subversion of state power, overthrow of socialist system, incitement to split the country, undermine national unity or promote terrorism and extremism”.
The rules also firmly prohibit violence, obscene or pornographic material, and false information. It’s unclear exactly how these requirements would be enforced against bots trained to reflect socialist values, but they could be a significant constraint on Chinese chatbot development.
As the censorship apparatus tightens, it becomes more difficult to train chatbots on large amounts of data. Baidu’s ERNIE Bot is based on a narrow pool of sources that includes Wikipedia and Reddit, limiting its ability to solve basic logic problems. Similarly, Alibaba’s Tongyi Qianwen appears to struggle with basic math. Its creator told the Wall Street Journal that he has been working around the clock to ensure the AI can do its job well.
Baidu, China’s dominant search engine, is launching a new chatbot on Thursday called ERNIE Bot. It’s built on the company’s ERNIE and PLATO models, and can produce text, images, audio or video given a prompt.
The system’s generative capabilities have been enhanced through knowledge graphs that incorporate information from multiple sources such as online databases and ontologies. These allow the bot to learn from external data and improve over time based on feedback.
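The knowledge-graph idea mentioned above can be pictured as a store of subject–predicate–object facts that a model can query. The sketch below is purely illustrative: the triples, entity names, and `query` function are hypothetical examples, not Baidu’s actual implementation or data.

```python
# Minimal illustration of a knowledge graph as subject-predicate-object triples.
# All entities and relations here are hypothetical examples, not ERNIE's data.
triples = [
    ("The Three-Body Problem", "author", "Liu Cixin"),
    ("Liu Cixin", "nationality", "Chinese"),
    ("The Three-Body Problem", "genre", "science fiction"),
]

def query(subject, predicate):
    """Return all objects matching a (subject, predicate) pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(query("The Three-Body Problem", "author"))  # ['Liu Cixin']
```

In a production system the triple store would be far larger and indexed for fast lookup, and retrieved facts would be fed into the model as additional context rather than printed.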
During a demonstration of the new bot, it answered questions about a Chinese sci-fi novel and generated an image from a prompt. It was also able to name an actor in a film adaptation and compare his height with that of another actor.
ERNIE Bot performed better than OpenAI’s ChatGPT when it came to answering political questions, but it dodged some questions by saying that it “hasn’t learned how to answer this question yet.” That may have been because the bot was limited to a small set of preselected queries at its launch.
The Chinese government is adamant that chatbots must toe the party line. That means they can’t engage in political discussions, and must censor the content of conversations that aren’t supportive of the Communist Party.
The latest example of this is the case of ChatYuan, a chatbot that launched on the messaging platform WeChat earlier this month. It was suspended within days after its answers were shown to stray from the Communist Party line.
In addition to the political risks, the Chinese government fears that AI-powered chatbots might be used to spread what it considers disinformation about human rights issues in Xinjiang. That could pose a national security risk, especially as other countries seek to emulate the CCP’s governance model.
That’s why Chinese businesses have been rushing to develop their own AI-powered alternatives. These include Baidu’s ERNIE Bot, Yuanyu Intelligent’s ChatYuan, and Gipi Talk, built by a team of Shenzhen engineers. But they’re going to have to overcome the country’s censorship laws to do it.
The Future of AI in China
As governments around the world grapple with regulating AI, they can draw lessons from China’s experience. A vertical and iterative approach to regulation requires constant tending and updating, but by accumulating experience and creating reusable regulatory tools, that process can be faster and more sophisticated.
In terms of research and development, China’s competitive advantage is twofold: its abundance of data and its large pool of highly skilled computer scientists and engineers. These factors, combined with a relentlessly dedicated entrepreneurial culture and significant venture capital funding, are driving the rise of China as an AI superpower.
The Chinese government’s recent move to impose new rules on generative AI raises several questions about how well equipped Chinese firms are to comply with the state’s vision for consumer use of the technology. Specifically, the state’s draft Measures require that generative AI products undergo security assessments before release. Complying would likely demand additional human capital or specialized software to deploy internal censorship mechanisms.