What does artificial intelligence need? Empathy and regulation

McKenna Moore, December 18, 2018
As the technology gains even greater adoption, regulation and empathy should be at the forefront.


Translated by: Charlie

Proofread by: Xia Lin

    The AI revolution is upon us.

Machine learning, one of the key artificial intelligence technologies, has already been deployed at more companies than you would expect. As it gains even greater adoption, regulation and empathy should be at the forefront.

Rana el Kaliouby, co-founder and CEO of emotional AI company Affectiva, said at Fortune’s Most Powerful Women Next Gen 2018 summit in Laguna Niguel, Calif., last Wednesday that EQ is just as important in technology as IQ. Because of the frequency with which people interact with technology and its growing impact on our lives, it’s important that empathy be built into it, she said.

One way to do that, el Kaliouby said, is to have diverse teams work on the technology. In an example of the problem, she said that middle-aged white men usually create and train face recognition AI using images of people who look like themselves, which means the technology often doesn’t work as well, if at all, on women of color.

    “It goes back to the teams designing these algorithms, and if your team isn’t diverse they aren’t going to be thinking about how this will work on a woman wearing hijab,” she said. “You solve for the problems you know.”

Navrina Singh, principal product lead of Microsoft AI, said that a perfect example of building technology with empathy in mind came to her during a project with an e-commerce site that was trying to make it easier for customers in India to buy its products. Due to the low literacy rate in the country, the company built speech-to-text functionality for users who couldn’t read. Beforehand, the company made a concerted effort to train its AI on dialects and cultures from all around India, because the intent and meaning of speech vary based on background. Deciphering intent is one of the greatest challenges and opportunities in AI right now, said Inhi Cho Suh, general manager of customer engagement at IBM Watson.

    Regulation is another big topic in machine learning at the moment. With bots and other related technology becoming more sophisticated, laws are necessary to check that power, the panelists agreed. Suh said that technology and regulation should be used to prevent misuse, while el Kaliouby stressed the need for mandatory ethics training for college computer science and engineering majors.

    Singh shared the acronym F.A.T.E., which stands for fairness, accountability, transparency and ethics, to sum up the key ideas to keep in mind when creating and regulating this technology. Although there is a lot of bad news about technology, like the Cambridge Analytica scandal, in which a British political firm accessed personal data on up to 87 million Facebook users, we must not let fear guide the debate, said Vidhya Ramalingham, founder of counter-terrorism technology company Moonshot CVE.

    “Policy should not be written out of fear, it should be written in an educated and informed manner,” she said.
