
Fallout grows over the departure of Google's A.I. ethics researcher

    Jeremy Kahn
    2020-12-04

Prominent A.I. researcher Timnit Gebru says she was fired for raising possible ethical concerns about some of Google's technology.



Chinese translation by 楊二一 and 陳怡軒.


    A prominent A.I. researcher has left Google, saying she was fired for criticizing the company’s lack of commitment to diversity, renewing concerns about the company’s attempts to silence criticism and debate.

    Timnit Gebru, who was technical co-lead of a Google team that focused on A.I. ethics and algorithmic bias, wrote on Twitter that she had been pushed out of the company for writing an email to “women and allies” at Google Brain, the company’s division devoted to fundamental A.I. research, that drew the ire of senior managers.

    Gebru is well-known among A.I. researchers for helping to promote diversity and inclusion within the field. She cofounded the Fairness, Accountability, and Transparency (FAccT) conference, which is dedicated to issues around A.I. bias, safety, and ethics. She also cofounded the group Black in AI, which highlights the work of Black machine learning experts as well as offering mentorship. The group has sought to raise awareness of bias and discrimination against Black computer scientists and engineers.

The researcher told Bloomberg News on December 3 that her firing came after a week in which she had wrangled with her managers over a research paper she had co-written with six other authors, including four from Google, that was about to be submitted for consideration at an academic conference next year. She said Google had asked her to either retract the paper or at least remove her name and those of the other Google employees.

She also sent an email to an internal employee group complaining about her treatment and accusing Google of being disingenuous in its commitment to racial and gender diversity, equity, and inclusivity.

Gebru told the news service that she had told Megan Kacholia, Google Research's vice president and one of her supervisors, that she wanted more discussion about the way the review process for the paper had been handled and clear guarantees that the same thing wouldn't happen again.

    “We are a team called Ethical AI, of course we are going to be writing about problems in AI,” Gebru said.

She told Bloomberg that she had told Kacholia that if the company was unwilling to address her concerns, she would resign and leave following a transition period. The company then told her it would not agree to her conditions and that it was accepting her resignation, effective immediately. It said Gebru's decision to email the internal listserv reflected "behavior that is inconsistent with the expectations of a Google manager," according to a tweet Gebru posted.

    Fellow A.I. researchers took to Twitter to express support for Gebru and outrage at her apparent firing. “Google’s retaliation against Timnit—one of the brightest and most principled AI justice researchers in the field—is *alarming*,” Meredith Whittaker, faculty director at the AI Now Institute at New York University, wrote on Twitter.

    “Speaking out against censorship is now ‘inconsistent with the expectations of a Google manager.’ She did that because she cares more and will risk everything to protect those she has hired to work under her—a team that happens to be more diverse than any other at Google,” Deb Raji, another researcher who specializes in A.I. fairness, ethics, and accountability and who works at Mozilla, wrote in a Twitter post.

Many noted that Gebru’s departure came on the same day the National Labor Relations Board accused Google of illegally dismissing workers who helped organize two companywide protests: one in 2019 against the company’s work with the U.S. Customs and Border Protection agency, and a 2018 walkout protesting the company’s handling of sexual harassment cases.

    The NLRB accused Google of using “terminations and intimidation in order to quell workplace activism.”

    Google has not issued a public comment on the NLRB case. In reference to questions about Gebru's case, it referred Fortune to an email from Jeff Dean, Google's senior vice president for research, that was obtained by the technology news site Platformer.

In that email, Dean said Gebru and her co-authors had submitted the paper for internal review within the two-week period the company requires and that an internal "cross-functional team" of reviewers had found "it didn't meet our bar for publication" and "ignored too much relevant research" that showed some of the ethical issues she and her co-authors were raising had at least been partially mitigated.

    Dean said Gebru's departure was "a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible AI research as an org and as a company."

    The incident is likely to renew concerns both inside and outside the company about the ethics of its technology and how it deals with employee dissent. Once known for its freewheeling and liberal corporate culture, Google has increasingly sought to limit employee speech, particularly when it touches on issues likely to embarrass the company or potentially impact its ability to secure lucrative work for various government agencies.

    Platformer also obtained and published what it said was the email Gebru had sent to colleagues. In it, she criticizes the company’s commitment to diversity, saying that “this org seems to have hired only 14% or so women this year.” (She does not make it clear if that figure is for all of Google Research or some other entity.) She also accuses the company of paying lip service to diversity and inclusion efforts and advises those who want the company to change to seek ways to bring external pressure to bear on Google.

    Gebru says in the email that she had informed Google's public relations and policy team of her intent to write the paper at least two months before the submission deadline and that she had already circulated it to more than 30 other researchers for feedback.

    Gebru implied in several tweets that she had raised ethical concerns about some of the company’s A.I. software, including its large language models. This kind of A.I. software is responsible for many breakthroughs in natural language processing, including Google’s improved translation and search results, but has been shown to incorporate gender and racial bias from the large amounts of Internet pages and books that are used to train it.

    In tweets on December 3, she singled out Dean, a storied figure among many computer engineers and researchers as one of the original coders of Google’s search engine, and implied that she had been planning to look at bias in Google’s large language models. “@JeffDean I realize how much large language models are worth to you now. I wouldn’t want to see what happens to the next person who attempts this,” she wrote.

    Earlier in the week, Gebru had also implied Google managers were attempting to censor her work or bury her concerns about ethical issues in the company’s A.I. systems. “Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection? Because with the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?” she wrote in a Twitter post on Dec. 1.

The intellectual property in content published by Fortune China (財富中文網) is exclusively owned or held by Fortune Media IP Limited and/or the relevant rights holders. Unauthorized reproduction, excerpting, copying, or mirroring is prohibited.