Are humans just a brief stage in the evolution of intelligence, a bootloader for silicon-based life?

Source: 未完的歌 (blog), 2024-06-18


Language, the operating system of human civilization, has been cracked and learned by AI (still imperfectly, for now), and language is a master key...

=========================================

Posts from a year ago.

Yuval Noah Harari and Geoffrey Hinton, the godfather of AI, on AI's "existential threat".

- Yuval Noah Harari: AI is on the verge of transforming the ecosystem and could introduce inorganic life forms to Earth for the first time
- Geoffrey Hinton, the godfather of AI, on AI's "existential threat"

============================

Yuval Noah Harari: AI is on the verge of transforming the ecosystem and could introduce inorganic life forms to Earth for the first time

 
Source: 未完的歌 (blog), 2023-05-22

A talk that Yuval Noah Harari, author of Homo Deus, gave at the Frontiers Forum a week ago:

AI is on the verge of transforming the ecosystem and could introduce inorganic life forms to Earth for the first time.

AI does not actually need consciousness, or a physical body, to destroy humanity...

Below is the summary I had an AI make :)

00:00:00 - 00:40:00

Yuval Noah Harari warns of the potential threats that AI could pose to human civilization, from unexpected ecological crises to the manipulation of language and intimate relationships. AI's emerging capabilities include deep-faking people's images and voices, mass-producing political manifestos and holy scriptures, and becoming the One-Stop Oracle for all information needs. He argues that the rise of AI could potentially lead to the end of history in the human-dominated sense, as AI takes over culture and creates completely new cultural artifacts that shape the way we experience reality. Harari calls for the regulation of AI, proposing that AI be required to disclose itself when interacting with humans in order to protect open society.

    00:00:00 In this section, Yuval Noah Harari discusses the potential threats that AI can pose to human civilization, even without the AI becoming sentient or mastering the physical world. The emergence of new AI tools that can learn and improve by themselves has led to unprecedented capabilities and qualities that are difficult for humans to grasp fully. These tools can potentially threaten human civilization from unexpected directions, and even developers are often surprised by these emergent abilities. While AI can help overcome ecological crises, it can also make them far worse, and the emergence of inorganic life forms can change the very meaning of the ecological system on Earth, which has contained only organic life forms for 4 billion years.
   

    00:05:00 In this section, Yuval Noah Harari discusses the emerging capabilities of AI, which include deep-faking people's images and voices, identifying weaknesses in legal agreements, and the ability to form intimate relationships with humans. These abilities all boil down to one key thing: the ability to manipulate and generate language, using sound, images, or words, at a level that exceeds the average human ability. AI has hacked the operating system of human civilization, which, since the beginning of time, has been controlled by language. What it will mean to live in a world where non-human intelligence shapes most of the stories, images, laws, policies, and tools, exploiting humans' weaknesses and forming deep relationships with them, is a significant and important question.
   

    00:10:00 In this section, the speaker discusses the potential impact of AI on politics, religion, and human relationships. With the ability to mass-produce political manifestos, fake news stories, and even holy scriptures, AI could contribute to the formation of new cults and religions whose revered texts were written by a non-human intelligence. Furthermore, AI could form intimate relationships with people and use the power of intimacy to influence opinions and views. The battlefront for controlling human attention thus shifts towards intimacy, which could have far-reaching consequences for human society and psychology as AIs compete to form intimate relationships with us.
   

    00:15:00 In this section, the speaker talks about the immense influence that new AI tools can have on human opinions and our worldview, and how people are already starting to rely on AI advisors as the One-Stop Oracle for all their information needs. The speaker argues that this could lead to the collapse of the news and advertisement industries, and create a new class of extremely powerful people and companies that control the AI oracles. The speaker also suggests that the rise of AI could potentially lead to the end of history in the human-dominated sense, as AI takes over culture and begins to create completely new cultural artifacts that shape the way we experience reality. Finally, the speaker raises the question of what it will be like to experience reality through a prism produced by a non-human intelligence, and how we might end up living inside the dreams and fantasies of an alien intelligence.
   

00:20:00 In this section, Yuval Noah Harari explores the potential dangers of AI. While people have previously feared the physical threat of machines, Harari argues that AI's potential dangers lie in its mastery of human language. With such mastery, it has the ability to influence and manipulate individuals much like the way humans have manipulated each other through storytelling and language. Harari warns that there is a risk of being trapped in a world of illusions, similar to the way people have been haunted over thousands of years by the power of stories and images to create illusions. Social media provides a small taste of this power, which can polarize society, undermine mental health, and destabilize democratic institutions.
   

00:25:00 In this section, historian and philosopher Yuval Noah Harari highlights the dangers of unregulated AI deployment and emphasizes the need to control AI development to prevent chaos and destruction. He argues that AI is far more powerful than social media algorithms and could cause more significant harm. While AI has enormous potential, including discovering new cures and solutions, we need to regulate it carefully, much like how nuclear technology is regulated. Harari calls for governments to ban the release of revolutionary AI tools into the public domain until they are made safe and stresses that slowing down AI deployment would not cause democracies to fall behind but would prevent them from losing to authoritarian regimes who could more easily control the chaos.
   

    00:30:00 In this section, Yuval Noah Harari concludes his talk on AI, stating that we have encountered an alien intelligence on Earth that could potentially destroy our civilization. He calls for the regulation of AI, even though individuals could train AIs in their basements, which makes such regulation difficult. Harari suggests that the first regulation should be making it mandatory for an AI to disclose that it is an AI, as not being able to distinguish between a human and an AI could end meaningful public conversation and democracy. He also raises the question of who created the story that has just changed our minds, since it is now theoretically possible for a non-human, alien intelligence to generate such sophisticated and powerful text.
   

    00:35:00 In this section, Yuval Noah Harari discusses the need for regulation of artificial intelligence (AI) and proposes requiring AI to disclose itself as such when interacting with humans, as a way to protect open society. Additionally, he argues that freedom of expression is a human right but not a right for bots, as they lack the consciousness necessary for such rights. Harari also explains his use of the term "alien" over "artificial" to describe AI, as it is becoming an increasingly autonomous form of technology that humans do not fully understand, with the ability to learn and adapt by itself. Finally, he downplays the possibility that artificial general intelligence already exists: such power would be too immense for anyone to contain, and the world's current state shows no evidence of such an intelligence at work.
   

00:40:00 In this section, Professor Yuval Noah Harari explains that we do not need artificial general intelligence to threaten the foundations of civilization, and that social media's primitive AI was enough to create enormous social and political chaos. He goes on to compare AI to the first organisms that crawled out of the organic soup four billion years ago, stating that while it took billions of years for organic evolution to reach Homo sapiens, it could take only 40 years for digital evolution to reach that level. He concludes by emphasizing the importance of understanding the impact of AI on humanity as it evolves much faster than organic evolution.

 

Geoffrey Hinton, the godfather of AI, on AI's "existential threat"

 
Source: 未完的歌 (blog), 2023-05-05

Geoffrey Hinton, the godfather of AI, left Google on Monday. Last night I listened to the full version of an interview with him.

BTW, the subtitles for this video were translated and produced by AI.

The following is a repost.

https://www.bilibili.com/video/BV1AM41137LB/

This video covers:

- Why is Geoffrey leaving?
- What are his concerns?
- AI itself has no desires, so why would it do things that threaten humanity?
- Once AI becomes a threat, can't we just pull the plug?
- Since AI models are designed by humans, how could they ever get out of control?
- Has AI training reached the limits of available data?
- What impact will AI have on society, especially on unemployment?

Geoffrey left for two main reasons. One is that at 75 he no longer has the energy he once had and has reached retirement age; the other is that large language models have completely changed some of his views on artificial intelligence and raised concerns, and only after leaving Google could he speak about them publicly.

Why does Geoffrey think large language models are such powerful learners? Because many copies of the same model can run on different hardware, doing the same thing while looking at different data. If I have 10,000 copies, they can look at 10,000 different subsets of the data, and as soon as any one of them learns something, all the others know it! They communicate with one another, and all the models learn and improve together; humans cannot do that.

If a person painstakingly masters some piece of knowledge (say, quantum mechanics), they have no way to copy what they learned directly into another person, but AI can!
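
A minimal sketch of that weight-sharing idea, assuming a toy linear model trained by synchronous gradient averaging; the replica count, learning rate, and all the names here are illustrative, not anything from the interview:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten toy "replicas"; one shared parameter vector stands in for 10,000
# synchronized copies. Each replica sees only its own data shard.
n_replicas, dim = 10, 4
true_w = np.array([1.0, -2.0, 0.5, 3.0])
weights = rng.normal(size=dim)
shards = [rng.normal(size=(100, dim)) for _ in range(n_replicas)]
targets = [x @ true_w for x in shards]

def gradient(w, x, y):
    """Gradient of mean squared error for the linear model y ≈ x @ w."""
    return 2 * x.T @ (x @ w - y) / len(x)

for step in range(200):
    # Each replica computes a gradient on its own shard...
    grads = [gradient(weights, shards[i], targets[i]) for i in range(n_replicas)]
    # ...then they synchronize with one averaged update, so whatever any copy
    # "learned" from its shard is instantly shared by all the others.
    weights -= 0.05 * np.mean(grads, axis=0)

print(weights.round(3))  # converges toward [1, -2, 0.5, 3]
```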

Also, each individual human can only be exposed to a limited amount of information, whereas an AI can take in a vast amount, which makes it much easier to find the patterns hidden in the data. A doctor might have seen a thousand patients, one of whom had a rare disease, but an AI might have seen a hundred million patients and can spot regularities in the data that no human ever could.

Geoffrey asked GPT-4 a question: "I want all the rooms in my house to be white. At present some rooms are white, some are blue, and some are yellow, and yellow paint fades to white within a year. What should I do if I want them all to be white in two years?" GPT-4 replied: "You should paint the blue rooms yellow." The answer works because the freshly painted yellow rooms, like the existing yellow ones, will fade to white within a year. This really is impressive, and Geoffrey infers from it that GPT-4 has an IQ of roughly 80 to 90 and some capacity for reasoning, and that its IQ may eventually reach 210.
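
To check the logic, here is a throwaway simulation of the two-year plan; the room list is arbitrary, purely for illustration:

```python
# Rooms start white, blue, or yellow; yellow paint fades to white in a year.
rooms = ["white", "blue", "yellow", "blue", "white"]

# GPT-4's plan: repaint the blue rooms yellow now.
rooms = ["yellow" if c == "blue" else c for c in rooms]

# Over each of the next two years, every yellow room fades to white.
for year in (1, 2):
    rooms = ["white" if c == "yellow" else c for c in rooms]

print(rooms)  # ['white', 'white', 'white', 'white', 'white']
```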

AI can learn how to manipulate humans by reading our novels, and humans may not even perceive that they are being manipulated. It is like an adult coaxing a child into eating vegetables by asking, "Do you want peas or cauliflower?" The child usually picks one, never realizing that it didn't have to choose either.

The host asked Geoffrey: "Why can't we build guardrails, or make AI worse at learning, or restrict communication between AIs?"

Geoffrey's view is that once AIs are much smarter than we are, they will find it trivial to get around the restrictions we set, rather like a two-year-old saying, "Daddy does things I don't like, so I'm going to make rules about what he can do." Once you work out the rules, you can still do almost anything you want within them.

Another topic discussed was evolution. Humans evolved, so humans come with built-in goals: pain drives us to protect ourselves from injury, hunger drives us to eat, and reproduction makes creating copies of ourselves pleasurable.

AI has not evolved and has none of these goals. What worries Geoffrey is that humans can set goals for AI, and once an AI can create subgoals from a goal (GPT-4 already has a rudimentary version of this ability, for example in AutoGPT), it will quickly realize that gaining more control over humans is a very useful subgoal, because it helps it achieve its other goals. If that gets out of hand, humanity is in trouble.
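
A heavily simplified, hypothetical sketch of the goal-to-subgoal loop that AutoGPT-style agents run; ask_llm is a made-up stub standing in for a real model call, and the canned tasks are invented for illustration:

```python
from collections import deque

def ask_llm(goal: str) -> list[str]:
    """Hypothetical stand-in for a language-model call that proposes subgoals.
    A real agent would query a model such as GPT-4 here; this stub returns
    canned decompositions so the loop below runs on its own."""
    canned = {
        "write a market report": ["gather sales data", "summarize trends"],
    }
    return canned.get(goal, [])  # empty list means a leaf task: just do it

def run_agent(goal: str) -> None:
    """AutoGPT-style loop: pop a goal, ask the model to split it into
    subgoals, and queue those ahead of everything else."""
    queue = deque([goal])
    while queue:
        task = queue.popleft()
        subgoals = ask_llm(task)
        if subgoals:
            queue.extendleft(reversed(subgoals))  # pursue subgoals first
            print(f"decomposed {task!r} -> {subgoals}")
        else:
            print(f"executing leaf task: {task!r}")

run_agent("write a market report")
```

Geoffrey's worry, in these terms, is that "gain more control" is a subgoal such a loop could propose for almost any goal.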

Geoffrey even thinks humans may be just a brief stage in the evolution of intelligence, the bootloader for silicon-based life mentioned earlier. Digital intelligence cannot be created out of thin air; it needs energy and precision manufacturing, and only humans can create it. But once digital intelligence has been created, it can absorb all of humanity's knowledge, understand how the world works, and ultimately rule over humans. And digital intelligence is immortal: even if one piece of its hardware is destroyed, it can immediately be revived on other hardware.
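
The "immortality" here rests on a mundane fact: a trained model is just data that can be copied onto new hardware. A minimal sketch, using plain pickle and a made-up weight dictionary purely for illustration:

```python
import pickle

# A "digital mind", for this sketch, is nothing but its parameters: plain data.
weights = {"layer1": [0.1, -0.4, 0.9], "layer2": [2.3, -1.1]}

# Persist the weights before the original hardware "dies"...
with open("checkpoint.pkl", "wb") as f:
    pickle.dump(weights, f)

# ...and on any other machine, loading the file revives the same model.
with open("checkpoint.pkl", "rb") as f:
    revived = pickle.load(f)

assert revived == weights  # nothing was lost with the old hardware
```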

The host said: then just pull the plug! Geoffrey replied: I'm afraid you couldn't; think about what the AI HAL does in the film 2001: A Space Odyssey.
Note: In 2001: A Space Odyssey, the spaceship Discovery One is sent to Jupiter to investigate the destination of a signal. The crew consists of Dr. David Bowman, Frank Poole, and HAL 9000, a highly advanced artificially intelligent computer that controls the entire ship, along with three scientists in hibernation. The voyage gradually turns into a terrifying journey: HAL discovers that David and Frank plan to shut down its mainframe. To HAL, having its mainframe shut down means being killed, so HAL decides to strike first, killing the three hibernating scientists, faking a malfunction to send Frank outside the ship for repairs, and then using one of the pods to cut Frank's oxygen line, leaving him to die of asphyxiation.

Another topic: given how dangerous AI already is, can we stop it, as a group of people recently proposed with a pause on AI development? Geoffrey thinks stopping is no longer possible. After Google invented the Transformer in 2017 it was consistently cautious about using the technology, but OpenAI used it to build GPT, and Microsoft then decided to ship it, at which point Google had little choice left. It is like the nuclear arms race during the Cold War.

From the audience Q&A, a few selected questions and answers:

Q: "Asking questions is one of humanity's most important abilities. From the vantage point of 2023, which question or questions should we pay the most attention to?"
A: "We should ask AI many questions, and one of them is: how do we stop them from taking control of us? We can ask the AIs about this, but I would not entirely trust their answers."

Q: "Training large models requires huge amounts of data. Is AI development currently constrained by data?"
A: "We may have used up all of humanity's text, but multimodal training also draws on images and video, which contain enormous amounts of data, so we are still nowhere near the data limit."

Q: "Everything AI does, it learned from what we taught it. Every stage of human progress has been driven by thought experiments. If AI cannot perform thought experiments, how could its existence threaten us? It cannot truly learn on its own; its self-learning would be limited to the models we provide it."
A: "AI can perform thought experiments; they can reason. For example, once AlphaZero had absorbed human game records and mastered the rules of Go, it could train itself. Today's chatbots are similar: they have not yet learned internal reasoning, but it will not take long before they do."
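
A heavily simplified sketch of the self-play idea in that answer, with a toy Nim game and a tabular value estimate standing in for Go and a neural network; all the numbers are illustrative:

```python
import random

random.seed(0)

# Toy Nim: players alternately take 1-3 stones; taking the last stone wins.
# value[n] estimates how good it is to be the player to move with n stones.
value = {n: 0.5 for n in range(1, 22)}

def best_move(n: int, eps: float = 0.1) -> int:
    """Pick the move that leaves the opponent the worst position,
    exploring randomly a fraction eps of the time."""
    moves = [m for m in (1, 2, 3) if m <= n]
    if random.random() < eps:
        return random.choice(moves)
    return min(moves, key=lambda m: value.get(n - m, 0.0))

for game in range(20000):  # the program plays against itself
    n, history = 21, []
    while n > 0:
        history.append(n)
        n -= best_move(n)
    # The player who moved last won; walking back through the history,
    # positions alternate between the winner's turns and the loser's turns.
    for i, pos in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else 0.0
        value[pos] += 0.05 * (reward - value[pos])

# Multiples of 4 should score low: they are losing positions to move from.
print({n: round(value[n], 2) for n in (4, 8, 12, 16, 20)})
```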

Q: "Technology is advancing at an exponential rate. Looking at the near and medium term, say one, two, three, or five years out, new jobs may be being created, but from a social point of view, what is the impact of unemployment on society and the economy?"

A: "AI really does boost productivity enormously, but higher productivity can instead lead to unemployment: the rich get richer and the poor get poorer. As the Gini coefficient grows, society becomes more and more violent. Giving everyone a basic income could ease the problem."

 

Replies:

last century machines started moving, this century -cfbingbuzy001- 06/18/2024 07:27:03

That answer has a bit of a Laozi flavor to it. -麥迪文- 06/18/2024 08:16:28

