
The risk of runaway artificial intelligence


17 August 2017

If I were to approach you brandishing a cattle prod, you might at first be amused. But, if I continued my advance with a fixed maniacal grin, you would probably retreat in shock, bewilderment and anger. As electrode meets flesh, I would expect a violent recoil plus expletives.

Given a particular input, one can often predict how a person will respond. That is not the case for the most intelligent machines in our midst. The creators of AlphaGo — a computer program built by Google’s DeepMind that decisively beat the world’s finest human player of the board game Go — admitted they could not have divined its winning moves. This unpredictability, also seen in the Facebook chatbots that were shut down after developing their own language, has stirred disquiet in the field of artificial intelligence.

As we head into the age of autonomous systems, when we abdicate more decision-making to AI, technologists are urging deeper understanding of the mysterious zone between input and output. At a conference held at Surrey University last month, a team of coders from Bath University presented a paper revealing how even “designers have difficulty decoding the behaviour of their own robots simply by observing them”.

The Bath researchers are championing the concept of “robot transparency” as an ethical requirement: users should be able to easily discern the intent and abilities of a machine. And when things go wrong — if, say, a driverless car mows down a pedestrian — a record of the car’s decisions should be accessible so that similar errors can be coded out.

Other roboticists, notably Professor Alan Winfield of Bristol Robotics Laboratory at the University of the West of England, have similarly called for “ethical black boxes” to be installed in robots and autonomous systems, to enhance public trust and accountability. These would work in exactly the same way as flight data recorders on aircraft: furnishing the sequence of decisions and actions that precede a failure.

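The flight-recorder analogy can be made concrete. Below is a minimal sketch, in Python, of what such an "ethical black box" might look like: an append-only log of an autonomous system's sensed inputs and resulting decisions, queryable after a failure. All names and data here are hypothetical illustrations, not Winfield's actual proposal or any real system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class DecisionRecord:
    """One timestamped entry: what the system sensed, what it decided."""
    timestamp: str
    sensor_input: Any
    decision: str

@dataclass
class EthicalBlackBox:
    """Append-only decision log for an autonomous system,
    analogous to an aircraft flight data recorder."""
    records: list = field(default_factory=list)

    def log(self, sensor_input: Any, decision: str) -> None:
        self.records.append(DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            sensor_input=sensor_input,
            decision=decision,
        ))

    def last_n(self, n: int) -> list:
        # After a failure, investigators retrieve the sequence of
        # decisions and actions that immediately preceded it.
        return self.records[-n:]

# Hypothetical usage: a driverless car's control loop logs each decision.
box = EthicalBlackBox()
box.log({"lidar": "pedestrian_ahead"}, "brake")
box.log({"lidar": "clear"}, "accelerate")
print([r.decision for r in box.last_n(2)])  # ['brake', 'accelerate']
```

The key design property is that the log is written as decisions are made, not reconstructed afterwards — exactly what makes flight recorders trustworthy evidence.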
Many autonomous systems, of course, are unseen: they lurk behind screens. Machine-learning algorithms, grinding mountains of data, can affect our success at securing loans and mortgages, at landing job interviews, and even at being granted parole.

For that reason, says Sandra Wachter, a researcher in data ethics at Oxford University and the Alan Turing Institute, regulation should be discussed. While algorithms can correct for some biases, many are trained on already-skewed data. So a recruitment algorithm for management is likely to identify ideal candidates as male, white and middle-aged. “I am a woman in my early 30s,” she told Science, “so I would be filtered out immediately, even if I’m suitable . . . [and] sometimes algorithms are used to display job ads, so I wouldn’t even see the position is available.”

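The mechanism Wachter describes — skewed training data producing a skewed filter — can be shown in a few lines. The toy model below (entirely hypothetical data and names, not any real recruiter's system) "learns" nothing but the historical hire rate per group, then shortlists on it, reproducing the bias regardless of individual qualifications:

```python
# Hypothetical skewed hiring history: past managers were overwhelmingly
# male, so a naive model learns gender as a proxy for suitability.
history = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "female", "hired": False},
]

def hire_rate(gender: str) -> float:
    """Fraction of past applicants of this gender who were hired."""
    group = [r for r in history if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

def shortlist(candidates: list, threshold: float = 0.5) -> list:
    # Filters purely on the learned group hire rate, so a qualified
    # candidate from an under-hired group is dropped before review.
    return [c for c in candidates if hire_rate(c["gender"]) >= threshold]

candidates = [
    {"name": "A", "gender": "female", "qualified": True},
    {"name": "B", "gender": "male", "qualified": False},
]
print([c["name"] for c in shortlist(candidates)])  # ['B']
```

The qualified candidate is filtered out and the unqualified one kept, because the only signal the model ever saw was the historical skew — the same failure mode Wachter warns of in real recruitment and ad-targeting pipelines.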
The EU General Data Protection Regulation, due to come into force in May 2018, will offer the prospect of redress: individuals will be able to contest completely automated decisions that have legal or other serious consequences.

There is an existential reason for grasping precisely how data input becomes machine output — “the singularity”. This is the much-theorised point of runaway AI, when machine intelligence surpasses that of human creators. Machines could conceivably acquire the ability to shape and control the future on their own terms.

There need not be any premeditated malice for such a leap — only a lack of human oversight as AI programs, equipped with an ever-greater propensity to learn and the corresponding autonomy to act, begin to do things that we can no longer predict, understand or control. The development of AlphaGo suggests that machine learning has already mastered unpredictability, if only at one task. The singularity, should it materialise, promises a rather more chilling version of Game Over.
