
Financial Times: The ultimate weapon of fake news?

Series: Financial Times Original Readings


January 25, 2022



Researchers at Stanford University have developed Face2Face, a technology that makes it easy to forge video and audio of a given person. The technique has raised fears that, in the near future, someone could fabricate a convincing video of the US president declaring war, and the White House might struggle to respond in time. Will this technology become the ultimate weapon of fake news?

Vocabulary and background knowledge you may encounter in the test:

snippet [ˈsnɪpɪt] a small piece or fragment

febrile [ˈfiːbraɪl] feverish; feverishly tense

incendiary [ɪnˈsendiəri] inflammatory; intended to stir up trouble

nefarious [nɪˈfeəriəs] wicked; villainous

disconcert [ˌdɪskənˈsɜːt] to unsettle; to disturb

bogus [ˈbəʊɡəs] fake; counterfeit

The ultimate fake news scenario (704 words)

By Anjana Ahuja

Imagine looking in a mirror and seeing not your own reflection but that of Donald Trump. Each time you contort your face, you simultaneously contort his. You smile, he smiles. You scowl, he scowls. You control, in real time, the face of the president of the US.

That is the sinister potential of Face2Face, a technology developed by researchers at Stanford University in California that allows someone to transpose their facial gestures on to the video of someone else.

Now imagine marrying that “facial re-enactment” technology to artfully snipped audio clips of the president's previous public pronouncements. You post your creation on YouTube: a convincing snippet of Mr Trump declaring nuclear war against North Korea. In the current febrile climate, the incendiary video might well go viral before the White House can scramble a denial.

It is the ultimate fake news scenario but not an inconceivable one: scientists have already demonstrated the concept by altering YouTube videos of George HW Bush, Barack Obama and Vladimir Putin.

Now Darpa, the Defense Advanced Research Projects Agency in the US, has embarked on a research programme called MediFor (short for media forensics). Darpa says its programme is about levelling a field that “currently favours the manipulator”, a nefarious advantage that becomes a national security concern if the goal of forgery is propaganda or misinformation.

The five-year programme is intended to turn out a system capable of analysing hundreds of thousands of images a day and immediately assessing if they have been tampered with. Professor Hany Farid, a computer scientist at Dartmouth College, New Hampshire, is among the academics involved. He specialises in detecting the manipulation of images, and his work includes assignments for law enforcement agencies and media organisations.

“I've now seen the technology get good enough that I'm very concerned,” Prof Farid told Nature last week. “At some point, we will reach a stage when we can generate realistic video, with audio, of a world leader, and that's going to be very disconcerting.” He describes the attempt to keep up with the manipulators as a technological arms race.

At the moment, spotting fakery takes time and expert knowledge, meaning that the bulk of bogus pictures slip into existence unchallenged. The first step with a questionable picture is to feed it into a reverse image search, such as Google Image Search, which will retrieve the picture if it has appeared elsewhere (this has proven surprisingly useful in uncovering scientific fraud, in instances when authors have plagiarised graphs).
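
As a rough illustration of the idea behind a reverse image search, the Python sketch below compares perceptual hashes, which survive resizing and recompression, between a questionable picture and a reference library. It uses the open-source `imagehash` and `Pillow` packages; the `find_near_duplicates` helper, the file paths and the distance threshold are illustrative assumptions, not how Google Image Search actually works.

```python
# A minimal sketch of reverse image search via perceptual hashing:
# a near-duplicate of the query image hashes to almost the same bits,
# even after resizing or recompression.
from PIL import Image
import imagehash

def find_near_duplicates(query_path, library_paths, max_distance=8):
    """Return library images whose perceptual hash is close to the query's."""
    query_hash = imagehash.phash(Image.open(query_path))
    matches = []
    for path in library_paths:
        candidate_hash = imagehash.phash(Image.open(path))
        # Hamming distance between 64-bit hashes; small means "same picture".
        if query_hash - candidate_hash <= max_distance:
            matches.append(path)
    return matches

# Hypothetical usage: flag a suspicious photo that has appeared elsewhere.
# hits = find_near_duplicates("suspect.jpg", ["archive/a.jpg", "archive/b.jpg"])
```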

Photographs can be scrutinised for unusual edges or disturbances in colour. A colour image is composed of single, one-colour pixels. The lone dots are combined in particular ways to create the many hues and shading in a photograph. Inserting another image, or airbrushing something out, disrupts that characteristic pixellation. Shadows are another giveaway. Professor Farid cites a 2012 viral video of an eagle snatching a child: his speedy analysis revealed inconsistent shadows, exposing the film as a computer-generated concoction.
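
The splicing check described above can be illustrated with a small sketch: a region pasted in from another image often carries a different high-frequency noise signature than the host photograph. The block size, threshold and the `noisy_blocks` helper below are illustrative assumptions; professional forensic tools rely on far more principled statistics.

```python
# A minimal sketch of pixel-level tamper detection: compare per-block
# noise energy against the image-wide median and flag outlier blocks,
# whose "characteristic pixellation" looks foreign to the photograph.
import numpy as np
from PIL import Image

def noisy_blocks(path, block=32, ratio=3.0):
    """Flag blocks whose noise energy deviates strongly from the median."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    # High-frequency residual: the image minus a crude local average.
    local_avg = (gray
                 + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
                 + np.roll(gray, 1, 1) + np.roll(gray, -1, 1)) / 5.0
    residual = gray - local_avg
    h, w = residual.shape
    energies = {}
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            energies[(y, x)] = residual[y:y + block, x:x + block].var()
    median = np.median(list(energies.values()))
    # Blocks much noisier (or much smoother) than the rest are suspect.
    return [pos for pos, e in energies.items()
            if e > ratio * median or e < median / ratio]
```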

Researchers at Massachusetts Institute of Technology have also developed an ingenious method of determining whether the people in video clips are real or animated. By magnifying video clips and checking colour differences in a person's face, they can deduce whether the person has a pulse. Interestingly, some legal experts have argued that computer-generated child pornography should be covered by the First Amendment, which protects free speech. Cases have turned on experts being able to detect whether offending material contains live victims.
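
A minimal sketch of the pulse check described above, assuming the per-frame green-channel averages over the face region have already been extracted: band-pass the signal to plausible heart rates and look for a dominant spectral peak. The frame rate, frequency band and threshold are illustrative assumptions rather than MIT's published parameters.

```python
# A minimal sketch of pulse detection from subtle colour variation:
# a live face shows a strong periodic component in the green channel
# at heart-rate frequencies; a rendered face usually does not.
import numpy as np
from scipy.signal import butter, filtfilt

def has_pulse(green_means, fps=30.0, low_hz=0.75, high_hz=3.0, snr=4.0):
    """green_means: per-frame mean green value over the face region."""
    signal = np.asarray(green_means, dtype=float)
    signal -= signal.mean()
    # Band-pass to the 45-180 beats-per-minute range.
    b, a = butter(3, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    spectrum = np.abs(np.fft.rfft(filtered))
    # One dominant spectral peak well above the noise floor suggests a pulse.
    return spectrum.max() > snr * np.median(spectrum)
```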

Machine learning is aiding the fraudsters: determined fakers can build “generative adversarial networks”. A GAN is a sort of Jekyll-and-Hyde network that, on the one hand, generates images and, on the other, rejects those that do not measure up authentically against a library of images. The result is a machine with its own inbuilt devil's advocate, able to teach itself how to generate hard-to-spot fakes.
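
To make the Jekyll-and-Hyde description concrete, here is a minimal PyTorch sketch of the adversarial training loop: the discriminator learns to separate real samples from generated ones, while the generator learns to fool it. The network sizes and hyperparameters are illustrative toys; real image-generating GANs are vastly larger.

```python
# A minimal sketch of a generative adversarial network (GAN):
# the generator plays devil's advocate against the discriminator.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    batch = real_batch.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    # Discriminator: label real data 1, generated data 0.
    fake = generator(torch.randn(batch, latent_dim))
    d_loss = (bce(discriminator(real_batch), ones)
              + bce(discriminator(fake.detach()), zeros))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: produce samples the discriminator scores as real.
    fake = generator(torch.randn(batch, latent_dim))
    g_loss = bce(discriminator(fake), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```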

Not all artifice, however, is malevolent: two students built a program capable of producing art that looks like …art. Their source was the WikiArt database of 100,000 paintings: the program, GANGogh, has since generated creations that would not look out of place on a millionaire's wall.

Such is the epic reach of digital duplicity: it threatens not only to disrupt politics and destabilise the world order, but also to reframe our ideas about art.

Based on what you have read, complete the following self-test questions:

1.What is the purpose of the first paragraph in the passage?

A.To indicate the danger of facial re-enactment technology.

B.To raise awareness of the new fake news scenario.

C.To explain how the facial re-enactment technology works.

D.To demonstrate the link between technology and politicians.


2.Which of the following statements about Face2Face is true?

A.It has caused widespread worry about the ultimate fake news scenario.

B.It was developed by researchers at Massachusetts Institute of Technology.

C.It has gone viral on social media, especially on YouTube, since its release.

D.It is widely used for generating realistic video of the president's pronouncements.


3.What is MediFor according to the article?

A.A new technology which can identify video forgery effectively.

B.A research programme which develops image forgery detection techniques.

C.A YouTube channel which alters videos of famous world leaders.

D.An agency responsible for the development of emerging technologies.


4.Which of the following methods cannot be used for spotting fake pictures?

A.Feeding pictures into a reverse image search.

B.Scrutinising pictures for unusual edges or disturbances in colour.

C.Searching for inconsistent shadows in questionable pictures.

D.Inserting another image or airbrushing something out.


* * *

(1) Answer: C. To explain how the facial re-enactment technology works.

Explanation: In the opening paragraph the author imagines forging Donald Trump's image in real time, which explains what the Face2Face technology does.

(2) Answer: A. It has caused widespread worry about the ultimate fake news scenario.

Explanation: People have begun to worry that this new technology will perfect the forging of news.

(3) Answer: B. A research programme which develops image forgery detection techniques.

Explanation: MediFor is a five-year research programme focused on techniques that can rapidly examine images and identify those that have been tampered with.

(4) Answer: D. Inserting another image or airbrushing something out.

Explanation: At present, forged photos can be spotted with a reverse image search, by scrutinising pictures for unusual edges and disturbed colours, or by checking whether the shadows are consistent; inserting an image or airbrushing something out is how a fake is made, not how it is detected.

