
    Former OpenAI director reveals for the first time: why founder Sam Altman was fired

    Helen Toner, a former member of OpenAI’s nonprofit board, sharply criticized Sam Altman in an interview.


    Last November, OpenAI briefly removed Sam Altman as CEO. Image credit: JEROD HARRIS—GETTY IMAGES FOR VOX MEDIA



    One of the ringleaders behind the brief, spectacular, but ultimately unsuccessful coup to overthrow Sam Altman accused the OpenAI boss of repeated dishonesty in a bombshell interview that marked her first extensive remarks since November’s whirlwind events.

    Helen Toner, an AI policy expert from Georgetown University, sat on the nonprofit board that controlled OpenAI from 2021 until she resigned late last year following her role in ousting Altman. After staff threatened to leave en masse, he returned empowered by a new board with only Quora CEO Adam D’Angelo remaining from the original four plotters.

    Toner disputed speculation that she and her colleagues on the board had been frightened by a technological advancement. Instead she blamed the coup on a pronounced pattern of dishonest behavior by Altman that gradually eroded trust as key decisions were not shared in advance.

    “For years, Sam had made it very difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board,” she told The TED AI Show in remarks published on Tuesday.

    Even the very launch of ChatGPT, which sparked the generative AI frenzy when it debuted in November 2022, was withheld from the board, according to Toner. “We learned about ChatGPT on Twitter,” she said.

    Toner claimed Altman always had a convenient excuse at hand to downplay the board’s concerns, which is why for so long no action had been taken.

    “Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal, or it was misinterpreted or whatever,” she continued. “But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us and that’s a completely unworkable place to be in as a board.”

    OpenAI did not respond to a request by Fortune for comment.

    Things ultimately came to a head, Toner said, after she co-published a paper in October of last year that cast Anthropic’s approach to AI safety in a better light than OpenAI’s, enraging Altman.

    “The problem was that after the paper came out Sam started lying to other board members in order to try and push me off the board, so it was another example that just like really damaged our ability to trust him,” she continued, adding that the behavior coincided with discussions in which the board was “already talking pretty seriously about whether we needed to fire him.”
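
    “Over the past years, safety culture and processes have taken a backseat to shiny products,” Jan Leike said.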

    Taken in isolation, those and other disparaging remarks Toner leveled at Altman could be dismissed as sour grapes from the ringleader of a failed coup. The pattern of dishonesty she described is echoed, however, by similarly damaging accusations from a former senior AI safety researcher, Jan Leike, as well as from Scarlett Johansson.

    Attempts to self-regulate doomed to fail

    The Hollywood actress said Altman approached her with a request to use her voice for OpenAI’s latest flagship product, a ChatGPT voice bot that users can converse with, reminiscent of the fictional character Johansson played in the movie Her. She refused, and she suspects he may have blended in part of her voice anyway, against her wishes. The company disputes her claims but agreed to pause the voice’s use regardless.

    Leike, on the other hand, served as joint head of the team responsible for creating guardrails to ensure mankind can control hyperintelligent AI. He left this month, saying it had become clear to him that management had no intention of devoting the promised resources to his team, and issued a scathing rebuke of his former employer on his way out. (On Tuesday he joined Anthropic, the same OpenAI rival Toner had praised in October.)

    Once key members of its AI safety staff had scattered to the winds, OpenAI disbanded the team entirely, consolidating control in the hands of Altman and his allies. Whether those charged with maximizing financial results are best entrusted with implementing guardrails that may prove a commercial hindrance remains to be seen.

    Although certain staffers were having their doubts, few outside of Leike chose to speak up. Thanks to reporting by Vox earlier this month, it emerged that a key motivating factor behind that silence was an unusual nondisparagement clause that, if broken, would void an employee’s vested equity in perhaps the hottest startup in the world.

    Former OpenAI employee Jacob Hilton wrote on X: “When I left @OpenAI a little over a year ago, I signed a non-disparagement agreement, with non-disclosure about the agreement itself, for no other reason than to avoid losing my vested equity.”

    This followed earlier statements by former OpenAI safety researcher Daniel Kokotajlo that he voluntarily sacrificed his share of equity in order not to be bound by the exit agreement. Altman later confirmed the validity of the claims.

    “Although we never clawed anything back, it should never have been something we had in any documents or communication,” he posted earlier this month. “This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI; I did not know this was happening and I should have.”

    Addressing reports about how OpenAI handles equity, Altman posted on X: “We have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement). Vested equity is vested equity, full stop.”

    Toner’s comments come fresh on the heels of her op-ed in the Economist, in which she and former OpenAI director Tasha McCauley argued that the evidence shows no AI company can be trusted to regulate itself.

    “If any company could have successfully governed itself while safely and ethically developing advanced AI systems it would have been OpenAI,” they wrote. “Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives.”
