Commentary

What (Not Who) Can Reason Better - Ernie, Bard Or Bing?

Baidu, China’s search engine and artificial intelligence (AI) company, on Tuesday unveiled Ernie 4.0, a new version of its AI engine, claiming it rivals the GPT-4-based models built in the United States.

The company’s CEO, Robin Li, demonstrated the new version of Ernie at Baidu's annual conference and said the model has achieved comprehension, reasoning, and memory. It uses generative algorithms to create new content.

Fei-Fei Li, a Stanford computer scientist, speaking at the Wall Street Journal conference in Laguna Beach, California, on Tuesday, said scientists and intellectuals will be “looking at philosophical questions like awareness, sentience, and intention” related to generative AI.

ChatGPT and Google Bard are not available in China or Hong Kong, but Microsoft is seeing demand for its chatbot in Hong Kong.


Chinese tech giants like Baidu and Alibaba Group have been pushing their own AI products designed to avoid the kinds of sensitive questions and answers that have made third-party ChatGPT apps a target of local regulators, according to reports.

Alan Davidson, U.S. administrator of the National Telecommunications and Information Administration (NTIA), also spoke at the WSJ conference. He said the administration understands the need for innovation and wants it to happen in the U.S.

It will not happen, he said, unless “we have some clear sense of the guardrails, what the rules are, and how we deal with the risk. Not the long-term existential risks, but the real ones that we have today, like safety, security, and bias.”

Chinese authorities have already issued guidelines and rules meant to ensure AI-generated content aligns with official state narratives. A proposed regulation published by the Chinese government this week suggested using a blacklist system to block large language model (LLM) training data in which more than 5% of content is deemed illegal.

The South China Morning Post reported that a new draft guidance published Wednesday by the National Information Security Standardisation Technical Committee, the agency that sets standards for IT security, focuses on two key areas: the security of raw training data and the LLMs used to build generative AI services.

The guidelines, as reported, state that AI training materials "should not infringe copyright or breach personal data security," and require that training data "be processed by authorized data labelers and reviewers to pass security checks first."

When developers build LLMs, the guidelines say, the deep-learning models trained on massive datasets must be based on foundational models filed with and licensed by the authorities.

A blacklist system was proposed to block training data that contains more than 5% illegal content, defined in China as material that "incites violence and extremism or spreads rumors and misinformation or promotes pornography and superstition."
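As reported, the proposed rule amounts to a simple threshold test on a corpus. Here is a minimal sketch of how such a check might work; the draft guidance does not specify how individual items are classified, so the is_flagged function and the marker phrases below are purely illustrative assumptions, not part of the regulation.

```python
# Hypothetical sketch of the proposed blacklist rule: a training corpus is
# rejected when more than 5% of its documents are flagged as illegal content.

ILLEGAL_FRACTION_THRESHOLD = 0.05  # the 5% cutoff described in the draft

def is_flagged(text: str) -> bool:
    """Placeholder classifier for 'illegal content'; the draft does not define one."""
    banned_markers = ("incites violence", "spreads rumors")  # illustrative only
    return any(marker in text for marker in banned_markers)

def should_blacklist(corpus: list[str]) -> bool:
    """Return True if the share of flagged documents exceeds the threshold."""
    if not corpus:
        return False
    flagged = sum(1 for doc in corpus if is_flagged(doc))
    return flagged / len(corpus) > ILLEGAL_FRACTION_THRESHOLD

if __name__ == "__main__":
    sample = ["benign text"] * 95 + ["content that incites violence"] * 6
    print(should_blacklist(sample))  # True: 6/101 is roughly 5.9%, above 5%
```

Note that under such a per-corpus threshold, a dataset just under the 5% line would pass even though it still contains flagged material; the draft, as reported, describes only the blacklist cutoff, not any further filtering of individual items.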
