
Talk MP3 + bilingual transcript: How can we turn the internet into a trustworthy place?

Course: TED Audio


May 7, 2022

https://online2.tingclass.net/lesson/shi0529/10000/10387/tedyp110.mp3

The Tingclass TED Audio section provides MP3 audio of TED talks together with bilingual English-Chinese transcripts for English learners. This article presents the MP3 and bilingual transcript of the talk "How can we turn the internet into a trustworthy place?" We hope you enjoy it.

[Speaker] Claire Wardle

Claire Wardle is an expert in user-generated content and verification; her work focuses on improving the quality of online information.

[Topic] How can we help turn the internet into a trustworthy place?

[Bilingual transcript]

Translated by Nan Yang; reviewed by Jiasi Hao

00:01

No matter who you are or where you live, I'm guessing that you have at least one relative that likes to forward those emails. You know the ones I'm talking about -- the ones with dubious claims or conspiracy videos.

無論你是誰或者住在哪,我猜你們身邊至少有一位喜歡轉(zhuǎn)發(fā)那些電子郵件的親戚。你們知道我說的那些郵件是帶有可疑聲明或錄像的郵件。

00:48

And if you spend as much time as I have looking at misinformation, you know that this is just one example of many that taps into people's deepest fears and vulnerabilities.

而且如果你們花費(fèi)和我一樣多的時(shí)間來看這個(gè)誤傳,你們會(huì)知道這只是眾多利用人們最深層的恐懼和脆弱的例子之一。

01:11

Every day, across the world, we see scores of new memes on Instagram encouraging parents not to vaccinate their children. We see new videos on YouTube explaining that climate change is a hoax. And across all platforms, we see endless posts designed to demonize others on the basis of their race, religion or sexuality.

每一天,在世界範(fàn)圍內(nèi),我們都能看到Ins上出現(xiàn)的大量新表情包在鼓勵(lì)父母不要給孩子接種疫苗。我們看到Y(jié)ouTube上的新視頻在解釋說氣候變化問題是個(gè)騙局。在所有平臺(tái)上,我們看見了因?yàn)榉N族、宗教或性取向的不同,而妖魔化他人的無窮無盡的帖子。

01:32

Welcome to one of the central challenges of our time. How can we maintain an internet with freedom of expression at the core, while also ensuring that the content that's being disseminated doesn't cause irreparable harms to our democracies, our communities and to our physical and mental well-being? Because we live in the information age, yet the central currency upon which we all depend -- information -- is no longer deemed entirely trustworthy and, at times, can appear downright dangerous. This is thanks in part to the runaway growth of social sharing platforms that allow us to scroll through, where lies and facts sit side by side, but with none of the traditional signals of trustworthiness.

歡迎來到我們這個(gè)時(shí)代的主要挑戰(zhàn)之一。我們?nèi)绾尉S護(hù)一個(gè)以言論自由為核心的互聯(lián)網(wǎng),同時(shí)也能確保正在傳播的內(nèi)容不會(huì)對(duì)我們的民主、我們的社區(qū)和我們的身心健康造成不可彌補(bǔ)的傷害?因?yàn)槲覀兩钤谛畔r(shí)代,但是我們所有人都依賴的核心貨幣——信息——不再完全值得信賴,而且有時(shí)會(huì)顯得非常危險(xiǎn)。這部分“歸功”於社交共享平臺(tái)的迅猛發(fā)展,讓我們可以在謊言和真相並存的世界裡滑屏瀏覽,但是沒有任何傳統(tǒng)的可信賴的信號(hào)。

02:12

And goodness -- our language around this is horribly muddled. People are still obsessed with the phrase "fake news," despite the fact that it's extraordinarily unhelpful and used to describe a number of things that are actually very different: lies, rumors, hoaxes, conspiracies, propaganda. And I really wish we could stop using a phrase that's been co-opted by politicians right around the world, from the left and the right, used as a weapon to attack a free and independent press.

天啊——我們關(guān)於這方面的語(yǔ)言極其混亂。人們?nèi)匀怀撩造丁凹傩侣劇边@一短語(yǔ),儘管事實(shí)是,該短語(yǔ)毫無幫助,並且被用來描述一系列實(shí)際上非常不同的東西:謊言、謠言、惡作劇、陰謀、宣傳鼓吹……我非常希望我們可以停止使用這個(gè)已被世界各地左右兩派政客挪用的短語(yǔ),它被當(dāng)作武器來攻擊自由和獨(dú)立的新聞媒體。

02:40

(Applause)

(掌聲)

02:45

Because we need our professional news media now more than ever. And besides, most of this content doesn't even masquerade as news. It's memes, videos, social posts. And most of it is not fake; it's misleading. We tend to fixate on what's true or false. But the biggest concern is actually the weaponization of context. Because the most effective disinformation has always been that which has a kernel of truth to it.

因?yàn)槲覀儽纫酝魏螘r(shí)候都更需要我們的專業(yè)新聞媒體。而且除此之外,大多數(shù)內(nèi)容甚至都沒有被偽裝成新聞,而是以表情包、視頻、社交帖子的形式存在。而且大部分內(nèi)容不是假的,而是誤導(dǎo)。我們傾向于專注在真假上。但是最大的擔(dān)憂實(shí)際上是語(yǔ)境的武器化。因?yàn)樽钣行У奶摷傩畔⒁恢笔蔷哂姓鎸?shí)內(nèi)核的那些。

04:36

So I'd like to explain three interlocking issues that make this so complex and then think about some ways we can consider these challenges. First, we just don't have a rational relationship to information, we have an emotional one. It's just not true that more facts will make everything OK, because the algorithms that determine what content we see, well, they're designed to reward our emotional responses. And when we're fearful, oversimplified narratives, conspiratorial explanations and language that demonizes others is far more effective. And besides, many of these companies, their business model is attached to attention, which means these algorithms will always be skewed towards emotion.

所以我想解釋一下讓這事變得如此複雜的三個(gè)環(huán)環(huán)相扣的問題,然後再想想我們可以用哪些方法來思考這些挑戰(zhàn)。首先,我們與信息之間並不是理性的關(guān)係,而是感性的關(guān)係。並非更多的事實(shí)真相就會(huì)讓一切順利,因?yàn)闆Q定我們能看見什麼內(nèi)容的算法,是被設(shè)計(jì)來獎(jiǎng)勵(lì)我們的情感反應(yīng)的。而當(dāng)我們恐懼時(shí),過於簡(jiǎn)化的敘述、陰謀論解釋,和妖魔化他人的語(yǔ)言更加有效。此外,很多這類公司的商業(yè)模式與人們的關(guān)注度息息相關(guān),這意味著這些算法總是會(huì)偏向情感。

05:18

Second, most of the speech I'm talking about here is legal. It would be a different matter if I was talking about child sexual abuse imagery or content that incites violence. It can be perfectly legal to post an outright lie. But people keep talking about taking down "problematic" or "harmful" content, but with no clear definition of what they mean by that, including Mark Zuckerberg, who recently called for global regulation to moderate speech. And my concern is that we're seeing governments right around the world rolling out hasty policy decisions that might actually trigger much more serious consequences when it comes to our speech. And even if we could decide which speech to take up or take down, we've never had so much speech. Every second, millions of pieces of content are uploaded by people right around the world in different languages, drawing on thousands of different cultural contexts. We've simply never had effective mechanisms to moderate speech at this scale, whether powered by humans or by technology.

第二,我現(xiàn)在談?wù)摰拇蠖鄶?shù)言論是合法的。如果我在說的是兒童性虐待圖片,或者是煽動(dòng)暴力的內(nèi)容,那就是另外一回事。公開撒謊可以是完全合法的。但是人們一直在討論撤下“有問題的”或“有害的”內(nèi)容,卻並沒有對(duì)它們是什麼給出明確的定義,包括馬克·扎克伯格,他最近呼籲通過全球監(jiān)管來審核言論。而我的擔(dān)心是,我們看見世界各地的政府正在推出倉(cāng)促的政策決定,而這些決定在涉及我們的言論時(shí),實(shí)際上可能引發(fā)嚴(yán)重得多的後果。而且即使我們可以決定哪些言論保留、哪些撤下,我們也從來沒有過現(xiàn)在這麼多的言論。每一秒,數(shù)百萬條內(nèi)容以不同的語(yǔ)言被世界各地的人上傳,背後是上千種不同的文化背景。我們根本從未有過有效的機(jī)制,無論是依靠人工還是技術(shù)手段,去審核這種規(guī)模的言論內(nèi)容。

06:18

And third, these companies -- Google, Twitter, Facebook, WhatsApp -- they're part of a wider information ecosystem. We like to lay all the blame at their feet, but the truth is, the mass media and elected officials can also play an equal role in amplifying rumors and conspiracies when they want to. As can we, when we mindlessly forward divisive or misleading content without trying. We're adding to the pollution.

然后第三,這些公司——谷歌,推特,臉書,WhatsApp——它們是廣闊的信息生態(tài)系統(tǒng)的一部分。我們喜歡把所有責(zé)任都推到他們身上,但事實(shí)是,大眾媒體和民選官員只要他們想,也可以在擴(kuò)大謠言和陰謀上發(fā)揮同等作用。我們也一樣,當(dāng)我們漫不經(jīng)心地轉(zhuǎn)發(fā)分裂性或誤導(dǎo)性的內(nèi)容時(shí),甚至沒有費(fèi)力。我們正在加劇這種“污染”。

06:45

I know we're all looking for an easy fix. But there just isn't one. Any solution will have to be rolled out at a massive scale, internet scale, and yes, the platforms, they're used to operating at that level. But can and should we allow them to fix these problems? They're certainly trying. But most of us would agree that, actually, we don't want global corporations to be the guardians of truth and fairness online. And I also think the platforms would agree with that. And at the moment, they're marking their own homework. They like to tell us that the interventions they're rolling out are working, but because they write their own transparency reports, there's no way for us to independently verify what's actually happening.

我知道我們都在尋找簡(jiǎn)單的解決方法。但就是一個(gè)都沒有。任何解決方案都必須以大規(guī)模推出,也就是互聯(lián)網(wǎng)的規(guī)模,而且的確,這些平臺(tái)已經(jīng)習(xí)慣在那種級(jí)別的規(guī)模上運(yùn)行。但是我們可以並且應(yīng)該允許他們來解決這些問題嗎?他們當(dāng)然在努力。但是我們大多數(shù)人都會(huì)同意,實(shí)際上,我們不希望全球性的大公司成為網(wǎng)上真理與公平的守護(hù)者。我認(rèn)為這些平臺(tái)也會(huì)同意這一點(diǎn)。而此時(shí)此刻,他們是在自己批改自己的作業(yè)。他們喜歡告訴我們,他們推出的干預(yù)措施正在奏效,但是因?yàn)橥该鞫葓?bào)告是他們自己編寫的,我們無法獨(dú)立驗(yàn)證實(shí)際發(fā)生的情況。

07:26

(Applause)

(掌聲)

07:29

And let's also be clear that most of the changes we see only happen after journalists undertake an investigation and find evidence of bias or content that breaks their community guidelines. So yes, these companies have to play a really important role in this process, but they can't control it.

我們也要清楚,大多數(shù)我們看到的變化只發(fā)生在記者進(jìn)行了調(diào)查并找到了存在違反社區(qū)規(guī)則的偏見和內(nèi)容的證據(jù)之后。所以是的,這些公司必須在這個(gè)過程中扮演重要角色,但是他們無法控制它。

07:47

So what about governments? Many people believe that global regulation is our last hope in terms of cleaning up our information ecosystem. But what I see are lawmakers who are struggling to keep up to date with the rapid changes in technology. And worse, they're working in the dark, because they don't have access to data to understand what's happening on these platforms. And anyway, which governments would we trust to do this? We need a global response, not a national one.

那么政府呢?很多人相信全球監(jiān)管是清理我們信息生態(tài)系統(tǒng)的最后希望。但是我看見的是正在努力跟上技術(shù)迅速變革的立法者。更糟的是,他們?cè)诤诎抵忻鞴ぷ?,因?yàn)樗麄儧]有獲取數(shù)據(jù)的權(quán)限來了解這些平臺(tái)上正在發(fā)生些什么。更何況,我們會(huì)相信哪個(gè)政府來做這件事呢?我們需要全球的回應(yīng),不是國(guó)家的。

08:15

So the missing link is us. It's those people who use these technologies every day. Can we design a new infrastructure to support quality information? Well, I believe we can, and I've got a few ideas about what we might be able to actually do. So firstly, if we're serious about bringing the public into this, can we take some inspiration from Wikipedia? They've shown us what's possible. Yes, it's not perfect, but they've demonstrated that with the right structures, with a global outlook and lots and lots of transparency, you can build something that will earn the trust of most people. Because we have to find a way to tap into the collective wisdom and experience of all users. This is particularly the case for women, people of color and underrepresented groups. Because guess what? They are experts when it comes to hate and disinformation, because they have been the targets of these campaigns for so long. And over the years, they've been raising flags, and they haven't been listened to. This has got to change. So could we build a Wikipedia for trust? Could we find a way that users can actually provide insights? They could offer insights around difficult content-moderation decisions. They could provide feedback when platforms decide they want to roll out new changes.

所以缺少的環(huán)節(jié)是我們。是每天使用這些技術(shù)的那些人。我們能不能設(shè)計(jì)一個(gè)新的基礎(chǔ)設(shè)施來支持高質(zhì)量信息?我相信我們可以,而且關(guān)於我們實(shí)際上可以做什麼,我已經(jīng)有了一些想法。首先,如果我們認(rèn)真考慮讓公眾參與進(jìn)來,我們可以從維基百科汲取一些靈感嗎?他們已經(jīng)向我們展示了什麼是可能的。是的,它並不完美,但是他們已經(jīng)證明,憑藉正確的結(jié)構(gòu)、全球的視野和非常非常高的透明度,你可以建立起贏得大多數(shù)人信任的東西。因?yàn)槲覀儽仨氄业揭环N方法,充分利用所有用戶的集體智慧和經(jīng)驗(yàn)。對(duì)於婦女、有色人種和代表性不足的群體來說尤其如此。因?yàn)槟悴略觞N著?在仇恨和虛假信息方面,他們是專家,因?yàn)樗麄冮L(zhǎng)久以來一直是這些信息運(yùn)動(dòng)的攻擊目標(biāo)。多年來,他們一直在發(fā)出警示,卻沒有人傾聽。這必須改變。所以我們能否建立一個(gè)關(guān)於信任的維基百科?我們能否找到一種讓用戶真正提供見解的方法?他們可以對(duì)有難度的內(nèi)容審核決定提出見解。當(dāng)平臺(tái)決定要推出新變更時(shí),他們可以提供反饋。

09:28

Second, people's experiences with the information is personalized. My Facebook news feed is very different to yours. Your YouTube recommendations are very different to mine. That makes it impossible for us to actually examine what information people are seeing. So could we imagine developing some kind of centralized open repository for anonymized data, with privacy and ethical concerns built in? Because imagine what we would learn if we built out a global network of concerned citizens who wanted to donate their social data to science. Because we actually know very little about the long-term consequences of hate and disinformation on people's attitudes and behaviors. And what we do know, most of that has been carried out in the US, despite the fact that this is a global problem. We need to work on that, too.

第二,人們對(duì)信息的體驗(yàn)是個(gè)性化的。我臉書上的新聞推薦與你們的非常不同。你們的 YouTube 推薦與我的也很不同。這使得我們無法實(shí)際檢查大家看到的是什么信息。那么,我們是否可以想象為匿名數(shù)據(jù)開發(fā)某種集中式開放存儲(chǔ)庫(kù),并內(nèi)置隱私和道德問題?因?yàn)橄胂笠幌?,如果我們建立一個(gè)由關(guān)心且憂慮的公民組成的全球網(wǎng)絡(luò),他們希望將其社交數(shù)據(jù)捐贈(zèng)給科學(xué),那么我們將學(xué)到什么?因?yàn)槲覀儗?shí)際上對(duì)仇恨和虛假信息對(duì)人們態(tài)度和行為產(chǎn)生的長(zhǎng)期后果知之甚少。而我們知道的是,其中大部分是在美國(guó)進(jìn)行的,盡管這是一個(gè)全球性問題。我們也需要為此努力。

10:16

And third, can we find a way to connect the dots? No one sector, let alone nonprofit, start-up or government, is going to solve this. But there are very smart people right around the world working on these challenges, from newsrooms, civil society, academia, activist groups. And you can see some of them here. Some are building out indicators of content credibility. Others are fact-checking, so that false claims, videos and images can be down-ranked by the platforms.

第三點(diǎn),我們能否找到方法把這些點(diǎn)連起來?沒有任何一個(gè)部門能夠單獨(dú)解決這個(gè)問題,更不用說某一家非營(yíng)利組織、初創(chuàng)企業(yè)或政府了。但是世界各地有非常聰明的人們正在應(yīng)對(duì)這些挑戰(zhàn),他們來自新聞編輯部、民間社會(huì)組織、學(xué)術(shù)界和維權(quán)組織。你們?cè)谶@裡可以看見其中的一些人。有些人正在建立內(nèi)容可信度的指標(biāo)。還有些人在做事實(shí)核查,這樣虛假的聲明、視頻和圖像就可以被平臺(tái)降低排序。

10:41

A nonprofit I helped to found, First Draft, is working with normally competitive newsrooms around the world to help them build out investigative, collaborative programs. And Danny Hillis, a software architect, is designing a new system called The Underlay, which will be a record of all public statements of fact connected to their sources, so that people and algorithms can better judge what is credible. And educators around the world are testing different techniques for finding ways to make people critical of the content they consume. All of these efforts are wonderful, but they're working in silos, and many of them are woefully underfunded.

我協(xié)助建立的一個(gè)非營(yíng)利組織,名叫“初稿”(First Draft),正在與世界各地通常競(jìng)爭(zhēng)激烈的新聞編輯室合作,以幫助他們建立調(diào)查性協(xié)作項(xiàng)目。丹尼·希利斯,一個(gè)軟件設(shè)計(jì)師,正在設(shè)計(jì)一個(gè)叫做 The Underlay 的新系統(tǒng),它將記錄所有與其來源相連接的公開事實(shí)陳述,以便人們和算法可以更好地判斷什么是可信的。而且世界各地的教育者在測(cè)試不同的技術(shù),以找到能使人們對(duì)所看到內(nèi)容產(chǎn)生批判的方法。所有的這些努力都很棒,但是他們埋頭各自為戰(zhàn),而且很多都嚴(yán)重資金不足。

11:18

There are also hundreds of very smart people working inside these companies, but again, these efforts can feel disjointed, because they're actually developing different solutions to the same problems.

在這些公司內(nèi)部也有成百上千的聰明人在努力,但是同樣,這些努力讓人感到不夠連貫,因?yàn)樗麄冋跒橥瑯拥膯栴}建立不同的解決方案。

11:29

How can we find a way to bring people together in one physical location for days or weeks at a time, so they can actually tackle these problems together but from their different perspectives? So can we do this? Can we build out a coordinated, ambitious response, one that matches the scale and the complexity of the problem? I really think we can. Together, let's rebuild our information commons.

我們?cè)趺茨苷业揭环N方法,把這些人同時(shí)聚集在同一個(gè)地點(diǎn)幾天或幾周,這樣他們可以真正從不同角度共同解決這些問題?那么我們能做到嗎?我們能否建立一種協(xié)調(diào)一致,雄心勃勃的應(yīng)對(duì)措施,使其與問題的規(guī)模和復(fù)雜性相匹配?我真的認(rèn)為我們可以。加入我們,重建我們的信息共享吧。

11:52

Thank you.

謝謝。

11:54

(Applause)

(掌聲)
