In an era when computers are becoming ever more autonomous, how do we ensure that they act in accordance with human wishes?
That may sound like an abstract philosophical question, but it is also an urgent practical challenge, according to Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the world’s leading thinkers on artificial intelligence.
It is all too easy to imagine scenarios in which increasingly powerful autonomous computer systems cause terrible real-world damage, either through thoughtless misuse or deliberate abuse, he says. Suppose, for example, that in the not-too-distant future a care robot is looking after your children. You are running late and ask the robot to prepare a meal. The robot opens the fridge, finds no food, calculates the nutritional value of your cat and serves up a feline fricassee.
Or take a more horrifying example of abuse that is technologically possible today. A terrorist group launches a swarm of bomb-carrying drones in a city and uses image recognition technology to kill everyone in a police uniform.
As Prof Russell argues in his latest book, Human Compatible, we need better ways of controlling what computers do to prevent them acting in anti-human ways, by default or by design. Although it may be many years, if not decades, away, we must also start thinking seriously about what happens if we ever achieve superhuman AI.
Getting that ultimate control problem right could usher in a golden age of abundance. Getting it wrong could result in humanity’s extinction. Prof Russell fears it may take a Chernobyl-scale tragedy in AI to alert us to the vital importance of ensuring control.
For the moment, the professor is something of an outlier in the AI community in sounding such alarms. Although he co-wrote a previous textbook on AI that is used by most universities around the world, Prof Russell is critical of what he calls the standard model of AI and the “denialism” of many in the industry.