
Asking a Robot for a Job

From the series: Financial Times original-text reading


29 June 2020



Robots are not just taking jobs from humans; they have also begun hiring them, since they can screen applicants quickly. But this is risky.

Vocabulary and knowledge you may encounter in the test:

the air is thick with: (the air is) filled with, suffused with

seductive: alluring; tempting [sɪˈdʌktɪv]

inherently: intrinsically; by its very nature [ɪnˈhɪərəntli]

bias: prejudice; partiality [ˈbaɪəs]

ethnicity: ethnic origin or classification [eθˈnɪsɪti]

proxy: an agent or stand-in; a written authorisation [ˈprɒksi]

scenario: a script; an imagined situation [sɪˈnɑːriəʊ]

murky: dark; obscure; gloomy [ˈmɜːki]

The risks of relying on robots for fairer staff recruitment (630 words)

By Sarah O’Connor

Robots are not just taking people’s jobs away, they are beginning to hand them out, too. Go to any recruitment industry event and you will find the air is thick with terms like “machine learning”, “big data” and “predictive analytics”.

The argument for using these tools in recruitment is simple. Robo-recruiters can sift through thousands of job candidates far more efficiently than humans. They can also do it more fairly. Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.

This is a seductive idea but it is also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones.

For a start, any machine learning algorithm is only as good as the training data from which it learns. Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview. The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding “human error and unconscious bias”.

But a model like this would absorb any human biases at work in the original recruitment decisions. For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful. You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate.

Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them. Even then, algorithms can wind up discriminating. In a paper published this year, academics Solon Barocas and Andrew Selbst use the example of an employer who wants to select those candidates most likely to stay for the long term. If the historical data show women tend to stay in jobs for a significantly shorter time than men (possibly because they leave when they have children), the algorithm will probably discriminate against them on the basis of attributes that are a reliable proxy for gender.

Or how about the distance a candidate lives from the office? That might well be a good predictor of attendance or longevity at the company; but it could also inadvertently discriminate against some groups, since neighbourhoods can have different ethnic or age profiles.
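The proxy effect the article describes can be sketched in a few lines of Python. The data below is entirely made up for illustration: even after gender is removed from the records, a model that scores candidates on a correlated attribute (here, a hypothetical career-break flag) ends up rejecting mostly women.

```python
from collections import defaultdict

# Hypothetical historical records: (gender, took_career_break, stayed_long_term).
# In this invented data the career-break flag is correlated with gender.
history = [
    ("F", True, False), ("F", True, False), ("F", False, True),
    ("M", False, True), ("M", False, True), ("M", True, False),
]

# "Train" by measuring the retention rate per value of the proxy feature only;
# gender is deliberately excluded, as Mr Lee suggests.
counts = defaultdict(lambda: [0, 0])  # proxy value -> [stayed, total]
for _gender, took_break, stayed in history:
    counts[took_break][1] += 1
    counts[took_break][0] += int(stayed)

retention = {k: stayed / total for k, (stayed, total) in counts.items()}
print(retention)  # career-break applicants score far lower

# A "gender-blind" screen built on this score still filters out mostly women,
# because the proxy carries the gender information the employer removed.
```

This is only a toy frequency model, not any real recruitment system, but it shows why stripping protected attributes from CVs is not enough when other features correlate with them.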

These scenarios raise the tricky question of whether it is wrong to discriminate even when it is rational and unintended. This is murky legal territory. In the US, the doctrine of “disparate impact” outlaws ostensibly neutral employment practices that disproportionately harm “protected classes”, even if the employer does not intend to discriminate. But employers can successfully defend themselves if they can prove there is a strong business case for what they are doing. If the intention of the algorithm is simply to recruit the best people for the job, that may be a good enough defence.

Still, it is clear that employers who want a more diverse workforce cannot assume that all they need to do is turn over recruitment to a computer. If that is what they want, they will need to use data more imaginatively.

Instead of taking their own company culture as a given and looking for the candidates statistically most likely to prosper within it, for example, they could seek out data about where (and in which circumstances) a more diverse set of workers thrive.

Machine learning will not propel your workforce into the future if the only thing it learns from is your past.

1. Which of the following is not a reason for using these robots in recruitment?

A. sift job candidates more efficiently

B. take a more procedural approach to save time

C. sift job candidates more fairly

D. recruit a more diverse and meritocratic workforce


2. Which statement about relying on robots for fairer staff recruitment is not correct, according to the article?

A. algorithms are inherently neutral

B. it is seductive but dangerous

C. robots see the world in zeros and ones

D. machine learning algorithm is only as good as the training data from which it learns


3. According to Colin Lee’s research, what was the biggest predictor of being invited to interview?

A. gender

B. ethnicity

C. age

D. education


4. What should employers do if they want a more diverse workforce from computer-based recruitment?

A. use data more accurately

B. use data more imaginatively

C. gather more data

D. strip the CVs of attributes such as gender,age and ethnicity


(1) Answer: B. take a more procedural approach to save time

Explanation: Robo-recruiters can sift through thousands of candidates far more efficiently than humans. They can also do it more fairly: since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce. A "more procedural approach to save time" is not a reason given in the article.

(2) Answer: A. algorithms are inherently neutral

Explanation: This is a seductive idea, but it is also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones, and any machine learning algorithm is only as good as the training data from which it learns.

(3) Answer: C. age

Explanation: The research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful.

(4) Answer: B. use data more imaginatively

Explanation: Employers who want to hand recruitment over to a computer and still build a more diverse workforce will need to use data more imaginatively.
