We humans are fallible, imperfect beings prone to making mistakes and basic errors of judgment. So we create technologies to help ourselves out.
我們?nèi)祟惾菀追稿e(cuò),不完美,容易犯錯(cuò)和出現(xiàn)基本的判斷失誤。因此,我們發(fā)明技術(shù)來(lái)幫助自己。
Take plans to launch fleets of self-driving cars. The belief here is that the technology will improve safety. But this is a very bold assumption to make at this stage in its development.
For one thing, the evidence to support the view simply is not there yet. The best data we have come from tests conducted over the course of 2016 in California, a state which, it must be remembered, boasts a mild climate hardly representative of global driving conditions.
Google’s Waymo scored best, with one human intervention every 5,127 miles driven. This was an improvement on the year before, but nowhere near perfect. In all, each of Waymo’s 60 testing vehicles drove about 10,597 miles in 2016, roughly 3,000 miles less than the annual US average per vehicle, and over that distance required at least two interventions.
Tesla fared much worse. The electric vehicle maker’s four test cars each drove an average of 137 miles that year and encountered 45 disengagement events per vehicle, or roughly one every three miles. Each intervention represents an accident that was potentially avoided.
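As a rough sanity check on how these per-vehicle figures hang together, here is a minimal back-of-envelope sketch in Python. It uses only the numbers quoted above; the fleet-wide totals it derives are implied by those figures rather than reported independently.

# Back-of-envelope check of the 2016 California disengagement figures quoted above.
# Assumption: the per-vehicle mileage and intervention rates are as stated in the text;
# the fleet totals below are derived from them, not taken from the original reports.

# Waymo: 60 test vehicles, about 10,597 miles each, one intervention every 5,127 miles.
waymo_vehicles = 60
waymo_miles_each = 10_597
waymo_miles_per_intervention = 5_127

waymo_fleet_miles = waymo_vehicles * waymo_miles_each                   # ~635,820 miles
waymo_interventions = waymo_fleet_miles / waymo_miles_per_intervention  # ~124 fleet-wide
print(f"Waymo: about {waymo_interventions / waymo_vehicles:.1f} interventions per car")  # ~2.1

# Tesla: four test cars, about 137 miles each, 45 disengagements per car.
tesla_miles_each = 137
tesla_disengagements_each = 45
print(f"Tesla: about {tesla_miles_each / tesla_disengagements_each:.1f} miles per disengagement")  # ~3.0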
Given that most industry watchers believe the public will not tolerate any faults at all, none of this is encouraging. It is certainly true that the technology is improving. But it is also the case that self-driving cars have been around since the mid-1990s, with one vehicle achieving a 98.2 per cent “autonomous driving percentage” even back then.
But even if the technical challenges can be surmounted, unexpected negative externalities probably cannot.
A case in point is Uber’s latest autonomous vehicle accident in Arizona, where the fault lay with the human driver of another car, not Uber’s vehicle. In this case the human failed to yield, drawing attention to one of the biggest challenges for the forthcoming autonomous transition: a world in which humans and autonomous vehicles will have to interact with each other safely.
What motivates humans to act, however, is very different to what motivates algorithms. At the basic level, most drivers — save those who are drunk, suicidal or intent on sowing fear or terror — have an interest in their own self-preservation or the preservation of others. That cannot be guaranteed of complex algorithms.
There are exceptions, such as last week’s terror attack in London. But removing humans from behind the wheel will not necessarily reduce the risks. Cars that drive themselves could easily be weaponised, since they need only be hacked, not driven by a martyr.
Alcoholism, meanwhile, killed three times as many Americans as fatal crashes did in 2014. So if driving encourages sobriety, another unintended consequence could be a rise in alcohol and drug abuse once humans are freed of that responsibility.
Then there is the trust we have to put in the coders. Normally in the corporate world employers devise elaborate reward and penalty programmes to ensure that workers are incentivised to do the best possible job, even when it’s in their interests to take short-cuts. They are held accountable. In sectors where human sloppiness can have a disproportionately bad effect on others — banking, say, or air traffic control — these sorts of incentives matter even more.
Self-driving cars, however, will be programmed and maintained by coder armies benefiting from safety in numbers when it comes to accountability. Can we be sure that they will always be properly motivated?
Finally, while there is a good case for self-driving technology to augment human driving skills, there is a risk this could lead to a degradation of those abilities. And we surely would not want to risk discovering that they were no longer there when we really needed them.