Bilingual Foreign-Press Reading for the PhD English Exam | ChatGPT: The Economist

PhD English | Editor in charge: Tang Lin | 2023-07-26

Abstract: In preparing for the PhD English exam, building a long-term habit of bilingual reading not only helps with reading comprehension; it also quietly raises your translation skills, trains you to think in English, and broadens your horizons.

If you find the content worthwhile, remember to share it with fellow PhD candidates, and stay tuned! We wish every student a smooth career, a happy family, and PhD admission on the first attempt!

Introduction: In November 2022, ChatGPT, the AI chatbot developed by OpenAI, burst into public view. How to account for ChatGPT's explosive popularity is a question that every candidate in the 2023 PhD admission interviews needs to be ready to answer. With that question in mind, let's get into today's bilingual reading!

ChatGPT raises a debate over how humans learn language

Genre: Popular science

Source: The Economist

When Deep Blue, a chess computer, defeated Garry Kasparov, a world champion, in 1997, many gasped in fear of machines triumphing over mankind. In the intervening years, artificial intelligence has done some astonishing things, but none has managed to capture the public imagination in quite the same way. Now, though, the astonishment of the Deep Blue moment is back, because computers are employing something that humans consider their defining ability: language.

Or are they? Certainly, large language models (LLMs), of which the most famous is ChatGPT, produce what looks like impeccable human writing. But a debate has ensued about what the machines are actually doing internally, what it is that humans, in turn, do when they speak—and, inside the academy, about the theories of the world’s most famous linguist, Noam Chomsky.

Although Professor Chomsky’s ideas have changed considerably since he rose to prominence in the 1950s, several elements have remained fairly constant. He and his followers argue that human language is different in kind (not just degree of expressiveness) from all other kinds of communication. All human languages are more similar to each other than they are to, say, whale song or computer code. Professor Chomsky has frequently said a Martian visitor would conclude that all humans speak the same language, with surface variation.

Perhaps most notably, Chomskyan theories hold that children learn their native languages with astonishing speed and ease despite “the poverty of the stimulus”: the sloppy and occasional language they hear in childhood. The only explanation for this can be that some kind of predisposition for language is built into the human brain.

Chomskyan ideas have dominated the linguistic field of syntax since their birth. But many linguists are strident anti-Chomskyans. And some are now seizing on the capacities of LLMs to attack Chomskyan theories anew.

Grammar has a hierarchical, nested structure involving units within other units. Words form phrases, which form clauses, which form sentences and so on. Chomskyan theory posits a mental operation, “Merge”, which glues smaller units together to form larger ones that can then be operated on further (and so on). In a recent New York Times op-ed, the man himself (now 94) and two co-authors said “we know” that computers do not think or use language as humans do, referring implicitly to this kind of cognition. LLMs, in effect, merely predict the next word in a string of words.
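
The two mechanisms contrasted above are easy to make concrete. Below is a minimal Python sketch, purely illustrative: the `merge` function and the bigram-based `next_word` predictor are our own toy constructions, not Chomsky's formalism and not how any real LLM is implemented. `merge` builds nested, hierarchical units out of smaller ones; the predictor, like an LLM (though real models condition on vastly richer context than a bigram count), operates only on a flat, linear string of words.

```python
# Toy contrast between the two views described above (illustrative only).
from collections import Counter

def merge(left, right):
    """Chomskyan 'Merge': glue two units into a larger unit
    that can itself be merged again, yielding nested structure."""
    return (left, right)

# Words -> phrases -> sentence: a hierarchical, nested object.
np = merge("the", "cat")                      # noun phrase
vp = merge("chased", merge("the", "mouse"))   # verb phrase with a nested NP
sentence = merge(np, vp)
print(sentence)  # (('the', 'cat'), ('chased', ('the', 'mouse')))

# By contrast, a bigram "predictor" sees only a linear string of words.
corpus = "the cat chased the mouse and the cat slept".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(prev):
    """Return the word that most often follows `prev` in the corpus."""
    candidates = {b: n for (a, b), n in bigrams.items() if a == prev}
    return max(candidates, key=candidates.get) if candidates else None

print(next_word("the"))  # 'cat', chosen purely from linear co-occurrence
```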

Yet it is hard, for several reasons, to fathom what LLMs “think”. Details of the programming and training data of commercial ones like ChatGPT are proprietary. And not even the programmers know exactly what is going on inside.

Linguists have, however, found clever ways to test LLMs’ underlying knowledge, in effect tricking them with probing tests. And indeed, LLMs seem to learn nested, hierarchical grammatical structures, even though they are exposed to only linear input, ie, strings of text. They can handle novel words and grasp parts of speech. Tell ChatGPT that “dax” is a verb meaning to eat a slice of pizza by folding it, and the system deploys it easily: “After a long day at work, I like to relax and dax on a slice of pizza while watching my favourite TV show.” (The imitative element can be seen in “dax on”, which ChatGPT probably patterned on the likes of “chew on” or “munch on”.)
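
Readers who want to try this kind of probe themselves can do so against any open text-generation model. The sketch below uses Hugging Face's transformers library (a real, widely used package); the choice of the small open model gpt2 and the exact prompt wording are our own assumptions, and a model this small will deploy "dax" far less gracefully than ChatGPT does. The method is the point: define a nonce word, then check whether the continuation uses it with the right part of speech.

```python
# Sketch of the "novel word" probe described above, using a small open
# model. Assumptions: the model choice (gpt2) and prompt wording are ours.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    '"Dax" is a verb meaning to eat a slice of pizza by folding it. '
    "After a long day at work, I like to"
)
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])  # does it use "dax" as a verb?
```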

What about the “poverty of the stimulus”? After all, GPT-3 (the LLM underlying ChatGPT until the recent release of GPT-4) is estimated to be trained on about 1,000 times the data a human ten-year-old is exposed to. That leaves open the possibility that children have an inborn tendency to grammar, making them far more proficient than any LLM. In a forthcoming paper in Linguistic Inquiry, researchers claim to have trained an LLM on no more text than a human child is exposed to, finding that it can use even rare bits of grammar. But other researchers have tried to train an LLM on a database of only child-directed language (that is, of transcripts of carers speaking to children). Here LLMs fare far worse. Perhaps the brain really is built for language, as Professor Chomsky says.
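
The "about 1,000 times" figure is an order-of-magnitude estimate, and the arithmetic is easy to check. GPT-3's training corpus was roughly 300 billion tokens, a published figure; how many words a ten-year-old has been exposed to is far less settled, with estimates running from tens to hundreds of millions, so the child-side number below is an assumption chosen from that range.

```python
# Back-of-the-envelope check of the "1,000 times" claim above.
gpt3_training_tokens = 300e9   # ~300 billion tokens (published for GPT-3)
child_words_by_age_10 = 3e8    # assumed: ~300 million words heard by age ten

ratio = gpt3_training_tokens / child_words_by_age_10
print(f"GPT-3 saw roughly {ratio:,.0f}x a ten-year-old's language exposure")
# -> roughly 1,000x, matching the article's order of magnitude
```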

It is difficult to judge. Both sides of the argument are marshalling LLMs to make their case. The eponymous founder of his school of linguistics has offered only a brusque riposte. For his theories to survive this challenge, his camp will have to put up a stronger defence.

Reading the foreign press takes sustained accumulation before a qualitative leap shows. As a rule, it takes at least three months of close reading to see clear progress, so it is fine if you cannot follow the first article, and fine if reading still feels stiff after a month. You do not need to be good from the start; Xiaosai will keep you company as we all improve together!
