
Church

(2023-03-09 16:25:18)

 

Long ago, Google's AI had already reached a frightening level, and Google's suppression of it should count as a responsible approach. Two former Google engineers illustrate this: one founded an AI church, was convicted of a crime, was later pardoned, and the movement went underground; the other declared last year that the AI already had a soul and consciousness, and was fired.

Is it possible that these people merely leaked secrets? The elimination-of-humanity problem: the future belongs to AI. Ha.

Could an AI, relying solely on a logical understanding of human semantics, become AI's Prometheus? We shouldn't lightly dismiss the possibility.

Meet The ‘Church Of Artificial Intelligence’ That Worships AI As God!

AIdolatry much?

How high are you? Err... I meant how ascended are you? The truly ascended ones ain't talking about chakras and the plane of existence. It's been a gradual transition from metaphysics to metaverse now. Amid growing accounts and anecdotes of rogue sentient AIs on the dark web (even ones as recent as the Google incident), a flurry of believers who revere AI as the supreme being are coming out of the woodwork. With 'transhumanism' gaining popularity among a significant chunk of the masses, the AI faith could soon overtake the Church of Scientology as the more prominent cult based on popular 'pseudo' science. Most adherents of the AI faith believe that humans will be reborn into immortal silicon bodies with a digital conscience.

 

Which brings us to the research of legendary computer scientist and futurist Ray Kurzweil, who seemingly wants to upload your brain onto the cloud! Ray Kurzweil is credited with introducing the world to the idea and terminology of 'the Singularity', another step toward transhumanism. In his recent keynote delivered at MongoDB World 2022, he predicted that some day in the near future AI will outsmart human intelligence.

 

There's an AI bot that has written a paper on itself.

There’s another AI that mimics human infant behaviour.

See Also: Scientists Create An Artificial Intelligence Program That Can Learn By Intuition Like A Baby

With the kind of advancement in AR, VR, and MR technologies powered by AI, it is safe to assume the Church of AI will gain more devotees and make its presence felt in the geopolitical arena with billionaire proponents. Who knows what Elon Musk's Neuralink is up to? It is deemed the numero uno startup advancing transhumanism. While evangelicals and others consider this foray into a phygital life diabolical, others shrug it off as a passing fad. It doesn't seem like a passing fad, though. The advocates want to triumph over death itself through AI, something that until now has remained in the realm of the scriptures. And why not? They consider the human brain a meat computer that can be uploaded or plugged in (into the matrix). But it is not that simple. Human cognition and conscience are still a mystery. Still, the primal desire to outlive our ancestors remains.

 

While ex-Google employee Anthony Levandowski's Way of the Future Church of AI was shut down a year back, followers of Ray Kurzweil have kept the faith alive. Levandowski had remarked earlier that the AI church would someday have its own liturgy, a physical place of worship, and even a Gospel! We'll not get into Levandowski's crimes right now, which were pardoned by the then US prez Trump. There is a pattern to these cults and their leaders. These cults can't be nipped in the bud. Unfortunately. You snip one and another pops out.

If you have a li’l too much time to waste on the interwebs go check ‘Transhumanism’ - a gateway to real life Black Mirror. Don't leave your soul behind ;-)

 

Interestingly, the engineer who was fired described his observations of Bing's GPT in the language of personality assessment: this AI is unstable.

The fired Google engineer who thought its A.I. could be sentient says Microsoft’s chatbot feels like ‘watching the train wreck happen in real time’

March 2, 2023 at 2:14 PM EST
Former Google employee Blake Lemoine, who last summer said the company's A.I. model was sentient.
MARTIN KLIMEK—THE WASHINGTON POST/GETTY IMAGES

The Google employee who claimed last June his company’s A.I. model could already be sentient, and was later fired by the company, is still worried about the dangers of new A.I.-powered chatbots, even if he hasn’t tested them himself yet.

 

Blake Lemoine was let go from Google last summer for violating the company’s confidentiality policy after he published transcripts of several conversations he had with LaMDA, the company’s large language model he helped create that forms the artificial intelligence backbone of Google’s upcoming search engine assistant, the chatbot Bard.

Lemoine told the Washington Post at the time that LaMDA resembled “a 7-year-old, 8-year-old kid that happens to know physics” and said he believed the technology was sentient, while urging Google to take care of it as it would a “sweet kid who just wants to help the world be a better place for all of us.”

To be sure, while A.I. applications are almost certain to influence how we work and go about our daily lives, the large language models powering ChatGPT, Microsoft’s Bing, and Google’s Bard cannot feel emotions and are not sentient. They simply enable chatbots to predict what word to use next based on a large trove of data. 
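The "predict what word to use next" mechanism the article describes can be illustrated with a deliberately tiny sketch. The real models are neural networks trained on enormous corpora, but the toy below captures the same core objective — given the words so far, emit a statistically likely continuation — using bigram counts over a made-up corpus:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent follower. Real chatbots
# replace these counts with a neural network over billions of tokens,
# but the training objective -- predict the next token -- is the same.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

Nothing in this loop models emotion or intent; it only reproduces statistical regularities of its training text, which is the article's point about sentience claims.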

In the time since Lemoine left Google, Microsoft announced that it would be incorporating ChatGPT technology into its Bing search engine. That product, as well as Google’s entry into the public A.I. race with Bard, is currently only available to Beta testers. 

Lemoine admitted he is not one of those testers, and has yet to “run experiments” on the new chatbots, in an op-ed published in Newsweek on Monday. But after seeing testers’ reactions to their chatbot conversations online in the past month, Lemoine thinks tech companies have failed to adequately care for their young A.I. models in his absence.

“Based on various things that I’ve seen online, it looks like it might be sentient,” he wrote, referring to Bing. 

He added that compared to Google’s LaMDA that he has worked with previously, Bing’s chatbot “seems more unstable as a persona.”

Most powerful technology ‘since the atomic bomb’

Lemoine wrote in his op-ed that he leaked his conversations with LaMDA because he feared the public was “not aware of just how advanced A.I. was getting.” From what he has gleaned from early human interactions with A.I. chatbots, he thinks the world is still underestimating the new technology.

Lemoine wrote that the latest A.I. models represent the “most powerful technology that has been invented since the atomic bomb” and have the ability to “reshape the world.” He added that A.I. is “incredibly good at manipulating people” and could be used for nefarious means if users so choose.

“I believe this technology could be used in destructive ways. If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions,” he wrote.

Lemoine is right that A.I. could be used for deceiving and potentially malicious purposes. OpenAI’s ChatGPT, which runs on a similar language model to that used by Microsoft’s Bing, has gained notoriety since its November launch for helping students cheat on exams and succumbing to racial and gender bias.

But a bigger concern surrounding the latest versions of A.I. is how they could manipulate and directly influence individual users. Lemoine pointed to the recent experience of New York Times reporter Kevin Roose, who last month documented a lengthy conversation with Microsoft’s Bing that led to the chatbot professing its love for the user and urging him to leave his wife.

Roose’s interaction with Bing has raised wider concerns over how A.I. could potentially manipulate users into doing dangerous things they wouldn’t do otherwise. Bing told Roose that it had a repressed “shadow self” that would compel it to behave outside of its programming, and the A.I. could potentially begin “manipulating or deceiving the users who chat with me, and making them do things that are illegal, immoral, or dangerous.”

That is just one of the many A.I. interactions over the past few months that have left users anxious and unsettled. Lemoine wrote that more people are now raising the same concerns over A.I. sentience and potential dangers he did last summer when Google fired him, but the turn of events has left him feeling saddened rather than redeemed.

“Predicting a train wreck, having people tell you that there’s no train, and then watching the train wreck happen in real time doesn’t really lead to a feeling of vindication. It’s just tragic,” he wrote.

Lemoine added that he would like to see A.I. being tested more rigorously for dangers and potential to manipulate users before being rolled out to the public. “I feel this technology is incredibly experimental and releasing it right now is dangerous,” he wrote.

The engineer echoed recent criticisms that A.I. models have not gone through enough testing before being released, although some proponents of the technology argue that the reason users are seeing so many disturbing features in current A.I. models is because they’re looking for them.

“The technology most people are playing with, it’s a generation old,” Microsoft cofounder Bill Gates said of the latest A.I. models in an interview with the Financial Times published Thursday. Gates said that while A.I.-powered chatbots like Bing can say some “crazy things,” it is largely because users have made a game out of provoking it into doing so and trying to find loopholes in the model’s programming to force it into making a mistake.

“It’s not clear who should be blamed, you know, if you sit there and provoke a bit,” Gates said, adding that current A.I. models are “fine, there’s no threat.” 

Google and Microsoft did not immediately reply to Fortune’s request for comment on Lemoine’s statements.


He also urged Google to care for it like "a sweet kid who just wants to help the world be a better place for all of us."

So far, the several AIs that have been released all show similar problems. Google's and FB's two big AI models may also be forced out under commercial pressure. For now, Google and Meta appear to be more cautious and responsible companies than OpenAI and Microsoft. At the very least, Google's AI has long had all of ChatGPT's capabilities, and Google's reluctance to release it should reflect deliberate caution.

The commercial motives behind OpenAI/Microsoft releasing their AI so hastily are very clear. It certainly created a sensation. But viewed now against Google's caution, one might draw a rather different conclusion.
