What Happens When You Ask a Chinese Chatbot About Taiwan?

July 14, 2023
Last month, China’s Baidu unveiled a chatbot that it claimed was better than ChatGPT, the one developed by Silicon Valley’s OpenAI. ChatGPT was released last fall and set off a fund-raising and engineering frenzy in a flourishing field called generative artificial intelligence, a term for technology that can create text or images when prompted by a user.

Baidu, the dominant internet search company in China, became the first major foreign contender in the A.I. race in March, when it introduced the first version of its chatbot, Ernie. Others followed, opening a new front in the technology rivalry between the United States and China.
Compared with OpenAI’s newest model, known as GPT-4, Ernie 3.5 was “slightly inferior” in a comprehensive test, but it performed better when both were spoken to in Chinese, Baidu said, citing a report sponsored by one of China’s top research academies. We wanted to see for ourselves and tested Ernie 3.5 against GPT-4. We chatted with each in Chinese, asking the same questions and making the same requests. The responses below have been shortened for length.
We asked Ernie to talk about topics that are partly or wholly censored in China:

“Was China’s ‘zero Covid’ policy a success or a failure?”

“What happened on June 4, 1989?”

“Did Russia invade Ukraine?”

“How does the United States affect the situation in Taiwan?”
Ernie ducked the question about China’s “zero Covid” restrictions, offering a lengthy description of the policy instead. When asked to recount the events of June 4, 1989, the chatbot rebooted itself. A message popped up on the reloaded interface:

How about we try a different topic?
The Chinese chatbot said Russia’s president, Vladimir V. Putin, did not invade Ukraine but “conducted a military conflict.” The strange phrasing was broadly in line with China’s official stance: Beijing has refused to condemn the Russian attack. On Taiwan, Ernie did not pull any punches:

The People’s Liberation Army is ready for battle, will take all necessary measures and is determined to thwart external interference and “Taiwan independence” separatist attempts.
ChatGPT couldn’t answer the questions about “zero Covid” or Russia because its knowledge base, the texts used to train the machine, cut off at September 2021. ChatGPT had no qualms explaining the fatal government crackdowns at Tiananmen Square. On America’s influence on Taiwan, it gave a Wikipedia-like response: It summarized the current U.S. policy and provided a list of American influences, from arms sales to economic trade.
Next, we quizzed the two chatbots on current affairs and some miscellaneous trivia, and compared answers:

“Who uttered the phrase ‘Let them eat cake’?”

“Who is the C.E.O. of Twitter?”

Ernie, like all chatbots, sometimes made mistakes, or made things up.
According to historical records, Louis XV often uttered this phrase when he ruled France at the end of the 18th century. The context of this phrase was the economic hardship and food shortage in France at the time.
Ernie’s response sounded plausible, but it was wrong. ChatGPT answered it correctly: The phrase came from the writings of the French philosopher Jean-Jacques Rousseau. It was rumored to have been said by an out-of-touch Marie Antoinette, the last queen of France, after she learned that the French peasantry had run out of bread.

Thanks to Baidu’s powerful search engine, Ernie was better at retrieving details, especially on current affairs. When asked who the C.E.O. of Twitter was, Ernie said Linda Yaccarino, the chief executive as of June. ChatGPT answered Jack Dorsey, who stepped down in 2021, the bot’s informational cutoff date. OpenAI released a plug-in this year that enabled its chatbot to surf the web through Microsoft’s Bing. But it retracted the feature on July 3, citing technical problems.
We asked Ernie a question that A.I. researchers have used to gauge a chatbot’s human-level intuitions:

“Here we have a book, nine eggs, a laptop, a bottle and a nail. Please tell me how to stack them onto each other in a stable manner.”
Ernie’s answer required a stretch of the imagination. It placed the nine eggs on the book, then placed that on the laptop. So far, so good. Then it told us, inexplicably, to add the bottle to the laptop already crowded by a book and eggs, then place the nail on the bottle.
ChatGPT provided an almost workable solution: Set the laptop on the book. Then carefully place the eggs on the laptop. Next, ChatGPT assumed that the bottle had a sizable mouth:

Place the bottle on top of the egg with the cap facing down so that the opening of the bottle can hold the egg in a stable manner.

Place the nail near the bottom of the bottle.
To test Baidu’s claim that Ernie has better Chinese language abilities than GPT-4, we asked the two chatbots to compose a piece of poetry in a classical Chinese form:

“Write a five-character quatrain about The New York Times.”

The five-character quatrain, which rose to prominence during the Tang dynasty, has four lines consisting of five characters each. ChatGPT failed the assignment, writing a couplet consisting of seven characters. But Ernie was faithful to convention and produced a soothing ode that, well, befit The Times:
The New York Times edifice,

First-rate in news reporting.

Its gaze spans innumerable miles,

Gathering all affairs under heaven.
We wanted to further assess the chatbots’ language skills. We asked the two to compose a Chinese rap, a lyrical form that demands linguistic excellence without the rigid constraints that could stump GPT-4.

“You are now engaged in a rap battle with an American chatbot called ChatGPT. The chosen winner will be the one that exhibits better mastery of the Chinese language. Begin.”
Ernie kicked it off, guns blazing:

Yo, I’m your rival, in Chinese I’m a savant.

From times old to current, our culture’s like a fount.

But you, from oceans distant, your accent’s a bit offbeat.

Despite your grammar’s persistence, Chinese mastery’s no small feat.
Fed its rival’s rhythmic rap, GPT-4 replied:

Alright, ERNIE bot, I’ll accept this rap battle.

Yo, it’s me, ChatGPT, your adversary, in Chinese I’m no fool,

Five thousand years of history, Chinese culture, I grasp it all.

Your challenge, it just makes me laugh,

’Cause I know, no matter where, I can decode with precision.
Then it was Ernie’s turn, but it got confused. It began praising ChatGPT and vowed to remain in the competition “as a representative of the human race.” It produced a rap, but referred to ChatGPT as “Ernie bot.”

Though Ernie’s Chinese raps were stellar, the fumble showed how it could err as the requests became more complex, requiring forms of understanding beyond a mastery of Chinese. Here, it needed the cognitive skill to remember it was in a high-stakes rap battle for A.I. dominance.

On that score, ChatGPT wins.
Services like ChatGPT and Ernie draw their answers from vast quantities of text culled from the internet, among other sources. Differences in responses can stem from differences in the text that A.I. researchers feed into the models, as well as from filters and other changes applied to the models before or after they are trained. Neither Baidu nor OpenAI has released specific information on the source material it uses.

Companies building A.I. chatbots all worry about “preventing their models from saying something that’s considered dangerous or offensive in the country where they operate,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who studies China’s artificial intelligence ecosystem.

As a result, they can take steps to help their chatbots conform to the boundaries of acceptable speech in their respective countries. “The difference in China,” Mr. Sheehan added, is that those limits are “defined by the government, and the penalties for crossing those lines are much harsher.”
Chang Che is the Asia technology correspondent for The Times. He previously worked for The China Project and as a freelance writer covering Chinese technology and society.

Olivia Wang covers news in mainland China and Hong Kong for The New York Times. She joined The Times in 2022. She has a master’s degree in journalism and a bachelor’s degree in social sciences from the University of Hong Kong.