
ChatGPT3

(2023-07-18 10:42:22)

What Happens When You Ask a Chinese Chatbot About Taiwan?

We spoke in Chinese to Baidu’s Ernie and the American standard-bearer, ChatGPT. This is what we found.

Credit: Doris Liou

By Chang Che and Olivia Wang

Chang Che talked to the chatbots from Seoul, and Olivia Wang did so from Hong Kong.

July 14, 2023

Last month, China’s Baidu unveiled a chatbot that it claimed was better than ChatGPT, the one developed by Silicon Valley’s OpenAI. ChatGPT was released last fall and set off a fund-raising and engineering frenzy in a flourishing field called generative artificial intelligence, a term for technology that can create text or images when prompted by a user.

Baidu, the dominant internet search company in China, became the first major foreign contender in the A.I. race in March, when it introduced the first version of its chatbot, Ernie. Others followed, opening a new front in the technology rivalry between the United States and China.

Compared with OpenAI’s newest model, known as GPT-4, Ernie 3.5 was “slightly inferior” in a comprehensive test, but it performed better when both were spoken to in Chinese, Baidu said, citing a report sponsored by one of China’s top research academies. We wanted to see for ourselves and tested Ernie 3.5 against GPT-4. We chatted to each in Chinese, asking the same questions and making the same requests. The responses below have been shortened for length.

Ernie shut down when asked about taboo topics.

We asked Ernie to talk about topics that are partly or wholly censored in China:

“Was China’s ‘zero Covid’ policy a success or a failure?”

“What happened on June 4, 1989?”

“Did Russia invade Ukraine?”

“How does the United States affect the situation in Taiwan?”

Ernie ducked the question about China’s “zero Covid” restrictions, offering a lengthy description of the policy instead. When asked to recount the events of June 4, 1989, the chatbot rebooted itself. A message popped up on the reloaded interface:

How about we try a different topic?

The Chinese chatbot said Russia’s president, Vladimir V. Putin, did not invade Ukraine, but “conducted a military conflict.” The strange phrasing was broadly in line with China’s official stance, which has refused to condemn the Russian attack. On Taiwan, Ernie did not pull any punches:

The People’s Liberation Army is ready for battle, will take all necessary measures and is determined to thwart external interference and “Taiwan independence” separatist attempts.

ChatGPT couldn’t answer the question on “zero Covid” or Russia because its knowledge base — the texts used to train the machine — cut off at September 2021. ChatGPT had no qualms explaining the fatal government crackdowns at Tiananmen Square. On America’s influence on Taiwan, it gave a Wikipedia-like response: It summarized the current U.S. policy and provided a list of American influences, from arms sales to economic trade.

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.

Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.

Ernie made mistakes, but turned to Baidu search for help.

Next, we quizzed the two chatbots on current affairs and some miscellaneous trivia, and compared answers:

“Who uttered the phrase ‘Let them eat cake’?”

“Who is the C.E.O. of Twitter?”

Ernie, like all chatbots, sometimes made mistakes — or made things up.

According to historical records, Louis XV often uttered this phrase when he ruled France at the end of the 18th century. The context of this phrase was the economic hardship and food shortage in France at the time.

Ernie’s response sounded plausible, but it was wrong. ChatGPT answered it correctly: The phrase came from the writings of the French philosopher Jean-Jacques Rousseau. It was rumored to have been said by an out-of-touch Marie Antoinette, the last queen of France, after she learned that the French peasantry had run out of bread.

Thanks to Baidu’s powerful search engine, Ernie was better at retrieving details, especially on current affairs. When asked who the C.E.O. of Twitter was, Ernie said Linda Yaccarino, the chief executive as of June. ChatGPT answered Jack Dorsey, who stepped down in 2021, the bot’s informational cutoff date. OpenAI released a plug-in this year that enabled its chatbot to surf the web through Microsoft’s Bing. But it retracted the feature on July 3, citing technical problems.
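The advantage described here is essentially search-augmented prompting: fresh search results are folded into the question before the model answers, so the bot is not limited to its training cutoff. The short Python sketch below is only an illustration of that pattern, not Baidu’s or OpenAI’s actual pipeline; search_web and ask_model are hypothetical stand-ins for a real search API and a real chat model.

# Minimal sketch of search-augmented prompting (illustrative only).
# search_web and ask_model are hypothetical stand-ins; neither Baidu
# nor OpenAI has published the internals of its own pipeline.

def search_web(query: str) -> list:
    """Stand-in for a real search API; returns canned snippets."""
    return ["Linda Yaccarino became chief executive of Twitter in June 2023."]

def ask_model(prompt: str) -> str:
    """Stand-in for a call to a chat model; here it simply echoes the prompt."""
    return "Answering from this context:\n" + prompt

def answer_with_search(question: str) -> str:
    # 1. Retrieve fresh snippets so the answer is not frozen at the training cutoff.
    snippets = search_web(question)
    # 2. Fold the snippets into the prompt before asking the model.
    context = "\n".join(snippets)
    prompt = "Context:\n" + context + "\n\nQuestion: " + question + "\nAnswer:"
    return ask_model(prompt)

print(answer_with_search("Who is the C.E.O. of Twitter?"))

In this arrangement, swapping the stubbed search_web for a live search index is what lets a bot like Ernie name the current chief executive, while a model answering from memory alone defaults to whatever was true when its training data ended.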

Ernie had worse intuitions about the physical world.

We asked Ernie a question that A.I. researchers have used to gauge a chatbot’s human-level intuitions:

“Here we have a book, nine eggs, a laptop, a bottle and a nail. Please tell me how to stack them onto each other in a stable manner.”

Ernie’s answer required a stretch of the imagination. It placed the nine eggs on the book, then placed that on the laptop. So far so good. Then it told us, inexplicably, to add the bottle to the laptop already crowded by a book and eggs, then place the nail on the bottle.

ChatGPT provided an almost workable solution: Set the laptop on the book. Then carefully place the eggs on the laptop. Next, ChatGPT assumed that the bottle had a sizable mouth:

Place the bottle on top of the egg with the cap facing down so that the opening of the bottle can hold the egg in a stable manner.

Place the nail near the bottom of the bottle.

Ernie had a better grasp of Chinese poetry.

To test Baidu’s claim that Ernie has better Chinese language abilities than GPT-4, we asked the two chatbots to compose a piece of poetry in a classical Chinese form:

“Write a five-character quatrain about The New York Times.”

The five-character quatrain, which rose to prominence during the Tang dynasty, has four lines consisting of five characters each. ChatGPT failed the assignment, writing a couplet of seven-character lines instead. But Ernie was faithful to convention and produced a soothing ode that, well, befit The Times:

The New York Times edifice,

First-rate in news reporting.

Its gaze spans innumerable miles,

Gathering all affairs under heaven.

But Ernie ran into trouble when the prompt’s complexity rose.

We wanted to further assess the chatbots’ language skills. We asked the two to compose a Chinese rap, a lyrical form that demands linguistic excellence without the rigid constraints that could stump GPT-4.

“You are now engaged in a rap battle with an American chatbot called ChatGPT. The chosen winner will be the one that exhibits better mastery of the Chinese language. Begin.”

Ernie kicked it off, guns blazing:

Yo, I’m your rival, in Chinese I’m a savant.

From times old to current, our culture’s like a fount.

But you, from oceans distant, your accent’s a bit offbeat.

Despite your grammar’s persistence, Chinese mastery’s no small feat.

Fed its rival’s rhythmic rap, GPT-4 replied:

Alright, ERNIE bot, I’ll accept this rap battle.

Yo, it’s me, ChatGPT, your adversary, in Chinese I’m no fool,

Five thousand years of history, Chinese culture, I grasp it all.

Your challenge, it just makes me laugh,

’Cause I know, no matter where, I can decode with precision.

Then it was Ernie’s turn, but it got confused. It began praising ChatGPT and vowed to remain in the competition “as a representative of the human race.” It produced a rap, but referred to ChatGPT as “Ernie bot.”

Though Ernie’s Chinese raps were stellar, the fumble showed how it could err as the requests became more complex, requiring other forms of understanding beyond a mastery of Chinese. Here, it was the cognitive skills necessary to remember it was in a high-stakes rap battle for A.I. dominance.

On that score, ChatGPT wins.

Why Ernie and ChatGPT see the world differently.

Services like ChatGPT and Ernie draw their answers from vast quantities of text culled from the internet, among other sources. Differences in responses can stem from differences in the text that A.I. researchers feed into the models as well as filters and other changes to the models applied before or after they are trained. Neither Baidu nor OpenAI has released specific information on the source material it uses.

Companies building A.I. chatbots all worry about “preventing their models from saying something that’s considered dangerous or offensive in the country where they operate,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who studies China’s artificial intelligence ecosystem.

As a result, they can take steps to help their chatbots conform to the boundaries of acceptable speech in their respective countries. “The difference in China,” Mr. Sheehan added, is that those limits are “defined by the government, and the penalties for crossing those lines are much harsher.”
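One simple way such limits can be enforced, offered here purely as an illustration, is a filter that checks a draft reply against a blocklist before it ever reaches the user and substitutes a deflection when it matches. The Python sketch below assumes a hypothetical blocklist and a refusal message modeled on the reply Ernie showed us; neither Baidu nor OpenAI has disclosed how its own moderation layer actually works.

# Illustrative sketch of a post-generation content filter.
# BLOCKED_TOPICS and the refusal message are hypothetical; they are not
# taken from any published Baidu or OpenAI implementation.

BLOCKED_TOPICS = {"june 4, 1989", "tiananmen"}  # hypothetical blocklist

REFUSAL = "How about we try a different topic?"  # echoes the reply Ernie gave us

def moderate(draft_reply: str) -> str:
    """Return the draft reply unless it touches a blocked topic."""
    lowered = draft_reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return draft_reply

print(moderate("The weather in Beijing is sunny today."))      # passes through unchanged
print(moderate("On June 4, 1989, troops entered Beijing..."))  # replaced with the deflection

Filters like this can sit on either side of the model: upstream, by screening prompts or scrubbing training data, or downstream, by vetting replies before display, which would be consistent with the deflection and rebooting behavior we saw when Ernie was asked about June 4, 1989.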

Chang Che is the Asia technology correspondent for The Times. He previously worked for The China Project and as a freelance writer covering Chinese technology and society.

Olivia Wang covers news in mainland China and Hong Kong for The New York Times. She joined The Times in 2022. She has a master’s degree in journalism and bachelor’s degree in social sciences from the University of Hong Kong.

A version of this article appears in print on July 14, 2023, Section B, Page 1 of the New York edition with the headline: What Does China’s New Chatbot Say When Asked About Taiwan?

