Companies are thinking twice about artificial intelligence

    Jonathan Vanian
    2021-02-07

The same machine-learning technology that helps companies target online ads on Facebook and Twitter can also serve bad actors as a propaganda tool for spreading misinformation.



    Alex Spinelli, chief technologist for business software maker LivePerson, says the recent U.S. Capitol riot shows the potential dangers of a technology not usually associated with pro-Trump mobs: artificial intelligence.

    The same machine-learning tech that helps companies target people with online ads on Facebook and Twitter also helps bad actors distribute propaganda and misinformation.

    In 2016, for instance, people shared fake news articles on Facebook, whose A.I. systems then funneled them to users. More recently, Facebook's A.I. technology recommended that users join groups focused on the QAnon conspiracy, a topic that Facebook eventually banned.

    “The world they live in day in and day out is filled with disinformation and lies,” says Spinelli about the pro-Trump rioters.

    A.I.'s role in disinformation, and problems in other areas including privacy and facial recognition, are causing companies to think twice about using the technology. In some cases, businesses are so concerned about ethics related to A.I. that they are killing projects involving A.I. or never starting them to begin with.

    Spinelli says that he has canceled some A.I. projects at LivePerson and at previous employers that he declined to name because of concerns about A.I. He previously worked at Amazon, advertising giant McCann Worldgroup, and Thomson Reuters.

    The projects, Spinelli says, involved machine learning analyzing customer data in order to predict user behavior. Privacy advocates often raise concerns about such projects, which rely on huge amounts of personal information.
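For a concrete picture, here is a minimal sketch, in Python, of the kind of model such projects typically build: a classifier trained on personal behavioral features to predict what a user will do next. The feature names and data are invented for illustration and do not describe LivePerson's or any other company's actual systems.

```python
# Hypothetical sketch of machine learning on customer data to predict
# user behavior. Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy customer records: [visits_last_30d, minutes_on_site, past_purchases].
# Each row is derived from personal behavioral data, which is why
# privacy advocates object to projects like these.
X = np.array([
    [12, 340, 5],
    [1,  15,  0],
    [8,  210, 3],
    [2,  30,  0],
])
y = np.array([1, 0, 1, 0])  # 1 = the user later made a purchase

model = LogisticRegression().fit(X, y)

# Predict the probability that a new user will make a purchase.
new_user = np.array([[6, 120, 1]])
print(model.predict_proba(new_user)[0, 1])
```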

    "Philosophically, I’m a big believer in the use of your data being approved by you,” Spinelli says.

    Ethical problems in corporate A.I.

    Over the past few years, artificial intelligence has been championed by companies for its ability to predict sales, interpret legal documents, and power more realistic customer chatbots. But it's also provided a steady drip of unflattering headlines.

    Last year, IBM, Microsoft, and Amazon barred police use of their facial recognition software because it more frequently misidentifies women and people of color. Microsoft and Amazon both want to continue selling the software to police, but they called for federal rules about how law enforcement can use the technology.

    IBM CEO Arvind Krishna went a step further by saying his company would permanently suspend its facial recognition software business, saying that the company opposes any technology used "for mass surveillance, racial profiling, violations of basic human rights and freedoms."

    In 2018, high-profile A.I. researchers Timnit Gebru and Joy Buolamwini published a research paper highlighting bias problems in facial recognition software. In reaction, some cosmetics companies paused A.I. projects that would determine how makeup products would look on certain people's skin, for fear the technology could discriminate against Black women, says Rumman Chowdhury, the former head of Accenture’s responsible A.I. team and now CEO of startup Parity AI.

“That was when a lot of companies cooled down with how much they wanted to use facial recognition,” Chowdhury says. “I had meetings with clients in makeup, and all of it stopped.”

Recent problems at Google have also caused companies to rethink A.I. Gebru, the A.I. researcher, left Google and then claimed that the company had censored some of her research. That research focused on bias problems in the company's A.I. software that understands human language, and on the huge amounts of electricity the software used in training, which could harm the environment.

This reflected poorly on Google because the search giant has experienced bias problems in the past, when its Google Photos product misidentified Black people as gorillas, and because it champions itself as an environmental steward.

    Shortly after Gebru's departure, Google suspended computer access to another of its A.I. ethics researchers who has been critical of the search giant. A Google spokesperson declined to comment about the researchers or the company's ethical blunders. Instead, he pointed to previous statements by Google CEO Sundar Pichai and Google executive Jeff Dean saying that the company is conducting a review of the circumstances of Gebru's departure and is committed to continuing its A.I. ethics research.

    Miriam Vogel, a former Justice Department lawyer who now heads the EqualAI nonprofit, which helps companies address A.I. bias, says many companies and A.I. researchers are paying close attention to Google’s A.I. problems. Some fear that the problems may have a chilling impact on future research about topics that don't align with their employers' business interests.

    “This issue has captured everyone’s attention,” Vogel says about Gebru leaving Google. “It took their breath away that someone who was so widely admired and respected as a leader in this field could have their job at risk.”

Although Google has positioned itself as a leader in A.I. ethics, the company's missteps undercut that high-profile reputation. Vogel hopes that companies don't overreact by firing or silencing their own employees who question the ethics of certain A.I. projects.

“I would hope companies do not take fear that by having an ethical arm of their organization they would create tensions that would lead to an escalation at this level,” Vogel says.

    A.I. ethics going forward

    Still, the fact that companies are thinking about A.I. ethics is an improvement from a few years ago, when they gave the issue relatively little thought, says Abhishek Gupta, who focuses on machine learning at Microsoft and is founder and principal researcher of the Montreal AI Ethics Institute.

    And no one thinks companies will completely stop using A.I. Brian Green, the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University, near San Francisco, says it's become too important of a tool to drop.

    “The fear of going out of business trumps the fear of discrimination,” Green says.

And while LivePerson's Spinelli worries about some uses of A.I., his company is still investing heavily in A.I. subsets like natural language processing, in which computers learn to understand language. He's hoping that by being public about the company's stance on A.I. and ethics, customers will trust that LivePerson is trying to minimize any harms.
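To make "natural language processing" concrete in a customer service context, the sketch below trains a toy intent classifier on a handful of support messages. The intents and phrases are invented for illustration and have nothing to do with LivePerson's actual products.

```python
# Hypothetical sketch of NLP in a chatbot setting: classifying a
# customer message by intent. Intents and phrases are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_messages = [
    "where is my order",
    "my package has not arrived",
    "I want a refund",
    "please cancel my subscription",
]
intents = ["track_order", "track_order", "refund", "cancel"]

# Bag-of-words features feeding a simple probabilistic classifier.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(training_messages, intents)

print(classifier.predict(["has my order shipped yet"]))  # likely ['track_order']
```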

LivePerson, along with professional services giant Cognizant and insurance firm Humana, is a member of the EqualAI organization, and all three have publicly pledged to test and monitor their A.I. systems for problems involving bias.
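One common form such bias testing takes is comparing a model's positive-outcome rate across demographic groups, the so-called demographic parity gap. The sketch below is a minimal illustration of that check, assuming a binary classifier and a single protected attribute; the data and the 0.1 tolerance are invented, and EqualAI does not prescribe this specific test.

```python
# Hypothetical sketch of one common bias test: comparing a model's
# positive-outcome rate across two demographic groups.
import numpy as np

def positive_rate(predictions: np.ndarray) -> float:
    """Share of cases where the model produced a positive outcome."""
    return float(np.mean(predictions))

# Model outputs (1 = favorable decision) split by a protected attribute.
group_a = np.array([1, 1, 0, 1, 1, 0, 1, 1])
group_b = np.array([1, 0, 0, 1, 0, 0, 1, 0])

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"demographic parity gap: {gap:.2f}")

# A monitoring pipeline might flag the model when the gap exceeds a
# chosen tolerance, e.g. 0.1, and route it for human review.
if gap > 0.1:
    print("Bias alert: review this model before deployment.")
```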

    Says Spinelli, “Call us out if we fail.”
