Artificial intelligence built on mountains of potentially biased information has created a real risk of automating discrimination, but is there any way to re-educate the machines?
The question for some is extremely urgent. In this ChatGPT era, AI will make more and more decisions for health care providers, bank lenders or lawyers, using whatever was scoured from the internet as source material.
AI's underlying intelligence, therefore, is only as good as the world it came from, as likely to be filled with wit, wisdom and usefulness as with hatred, prejudice and rants.
"It's dangerous because people are embracing and adopting AI software and really depending on it," said Joshua Weaver, director of the Texas Opportunity & Justice Incubator, a legal consultancy.
"We can get into this feedback loop where the bias in our own selves and culture informs bias in the AI and becomes a sort of reinforcing loop," he said.
Making sure technology more accurately reflects human diversity is not just a political choice.
ChatGPT-style generative AI, which can create a semblance of human-level reasoning in just seconds, opens up new opportunities to get things wrong, experts worry.
The AI giants are well aware of the problem, afraid that their models can descend into bad behavior, or overly reflect a western society when their user base is global.
The huge models on which ChatGPT is built "can't reason about what is biased or what isn't so they can't do anything about it," cautioned Jayden Ziegler, head of product at Alembic Technologies.
For now at least, it is up to humans to ensure that what the AI generates is appropriate and meets their expectations.