
FT editorial: Credit to the tech industry for trying to guard against the "Terminator"


25 August 2017

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Isaac Asimov’s precept formed the moral underpinning of his futuristic fiction; but 75 years after he first articulated his three laws of robotics, the first and crucial principle is being overtaken by reality.

True, there are as yet no killer androids rampaging across the battlefield. But there are already defensive systems in place that can be programmed to detect and fire at threats — whether incoming missiles or approaching humans. The Pentagon has tested a swarm of miniature drones — raising the possibility that commanders could in future send clouds of skybots into enemy territory equipped to gather intelligence, block radar or — aided by face recognition technology — carry out assassinations. From China to Israel, Russia to Britain, many governments are keen to put rapid advances in artificial intelligence to military use.

This is a source of alarm to researchers and tech industry executives. Already under fire for the impact that disruptive technologies will have on society, they have no wish to see their commercial innovations adapted to devastating effect. Hence this week’s call from the founders of robotics and AI companies for the UN to take action to prevent an arms race in lethal autonomous weapons systems. In an open letter, they underline their concern that such technology could permit conflict “at a scale greater than ever”, could help repressive regimes quell dissent, or that weapons could be hacked “to behave in undesirable ways”.

Their concerns are well-founded, but attempts to regulate these weapons are fraught with ethical and practical difficulties. Those who support the increasing use of AI in warfare argue that it has the potential to lessen suffering, not only because fewer front line troops would be needed, but because intelligent weapon systems would be better able to minimise civilian casualties. Targeted strikes against militants would obviate the need for indiscriminate bombing of the kind seen in Falluja or, more recently, Mosul. And there would be many less contentious uses for AI — say, driverless convoys on roads vulnerable to ambush.

At present, there is a broad consensus among governments against deploying fully autonomous weapons — systems that can select and engage targets with no meaningful human control. For the US military, this is a moral red line: there must always be a human operator responsible for a decision to kill. For others in the debate, it is a practical consideration — autonomous systems could behave unpredictably or be vulnerable to hacking.

It becomes far harder to draw boundaries between systems with a human “in the loop” — in full control of a single drone, for example — and those where humans are “on the loop”, supervising and setting parameters for a broadly autonomous system. In the latter case — which might apply to anti-aircraft systems now, or to future drone swarms — it is arguable whether human oversight would amount to effective control in the heat of battle.

Existing humanitarian law helps to an extent. The obligations to distinguish between combatants and civilians, avoid indiscriminate attacks and weapons that cause unnecessary suffering still apply; and commanders must take responsibility when they deploy robots just as they do for the actions of servicemen and women.

But the AI industry is right to call for clearer rules, no matter how hard it may be to frame and enforce them. Killer robots may remain the stuff of science fiction, but self-operating weapons are a fast-approaching reality.
