The Israeli Military's AI System, "Gospel"

Source: tz2000, 2023-12-14 15:13:28 [Blog] (1827610 bytes)
This post was last edited by tz2000 on 2023-12-14 15:14:28.
 
NEW: I took a look at "the Gospel," Israel's AI targeting system. Broadly speaking, I think there are two things I've learned: nobody knows for sure whether it works as advertised, and militaries around the world are going to adopt similar tech soon.
 
So where did the Gospel come from? It was developed by Unit 8200, Israel's signals intelligence service (akin to the US NSA). One of the earliest mentions I could find was a 2020 IDF "Innovation Award" it won.
 
Among its earliest uses was the 2021 operation "Guardians of the Wall" against Hamas in the Gaza Strip. The Israeli military publicly mentioned its use along with another AI system known as "the Alchemist."
 
Israel says that it struck roughly 1,500 targets in total during Guardians of the Wall. So roughly 13% of the targets in that conflict (about 200) were chosen by AI, according to the IDF's numbers.
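Those figures are easy to sanity-check; the absolute number of AI-chosen targets is not stated here, it is just what ~13% of ~1,500 works out to:

```python
# Back-of-the-envelope check of the IDF's figures: roughly 13% of
# ~1,500 total targets implies about 200 AI-generated targets.
total_targets = 1500
ai_share = 0.13
ai_targets = round(total_targets * ai_share)
print(ai_targets)  # → 195, i.e. on the order of 200
```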
 
There were reportedly some issues. An after-action report found that the Israelis learned their AI systems were being trained only on targets that had been selected by intelligence analysts. The intel agencies didn't keep records of the targets they'd REJECTED...
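The training-data problem described here is a textbook case of selection bias: a model that only ever sees analyst-approved targets has no negative examples, so there is nothing for it to learn rejection from. A toy sketch of that failure mode (hypothetical data and labels, purely illustrative, nothing here reflects the actual system):

```python
from collections import Counter

# Toy "training set": only candidates analysts APPROVED (label 1) were
# logged; rejected candidates were never recorded, so no label-0 rows exist.
approved_only = [({"signal": s}, 1) for s in (0.9, 0.8, 0.95, 0.7)]

# A degenerate "model" fit to this data just learns the majority label.
labels = [y for _, y in approved_only]
majority_label = Counter(labels).most_common(1)[0][0]

def predict(candidate):
    # With no negative examples, every candidate looks like a valid target.
    return majority_label

# Even an obviously weak candidate comes back approved:
print(predict({"signal": 0.01}))  # → 1
```

The point is not the trivial classifier; it is that without records of rejected targets, no amount of model sophistication can recover the distinction the analysts were actually making.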
 
And just because humans are in the loop doesn't mean it's safe. "Automation bias" can lead humans to go along with the decisions of computers, even when they're wrong. And the sheer speed of the system likely makes it harder to exercise human judgment, points out Lucy Suchman.
 
Finally, this entire realm of AI targeting is in a legal gray zone. Although the legal experts I spoke to unanimously agreed that commanders are still responsible for following the Law of Armed Conflict (LOAC), WHICH commanders are responsible if an AI makes a poor call?
 
Despite all these concerns, one thing EVERYONE seemed to agree on is that AI in the military is only going to grow. As former DIA chief Bob Ashley told me: "You're going to make decisions faster than your opponent, that's really what it's about."
 
I did a story about how the US is using AI in its surveillance systems earlier in the year. It's not too far off from the Israelis, though I don't know that the Americans have closed the loop on targeting in the same way. (Audio only, use yer ears)
 
And I'm increasingly convinced this is a step on the road to fully autonomous battlefield systems. One researcher laid out quite a convincing case in a paper he wrote on AI in urban warfare. Lethal AI will become just another tool in the arsenal...
 
The big limiting factor at the moment is probably robotics, but AI is making quite rapid strides in that area too. Deep reinforcement learning was used to train a drone earlier this year, and it outperformed human champions on a closed course.
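The drone result referenced here used deep reinforcement learning at far greater scale, but the underlying idea — learning a control policy by trial and error against a reward signal — can be sketched with tabular Q-learning on a toy one-dimensional "course." Everything below is illustrative and bears no relation to the real system:

```python
import random

random.seed(0)

# Toy 1-D course: states 0..5, actions -1/+1; reaching state 5 is the finish.
N_STATES, GOAL = 6, 5
actions = [-1, 1]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(500):                # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = random.choice(actions) if random.random() < eps \
            else max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), GOAL)
        r = 1.0 if s2 == GOAL else -0.1   # step penalty rewards speed
        best_next = max(Q[(s2, a2)] for a2 in actions) if s2 != GOAL else 0.0
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy moves forward (+1) from every state.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)  # → {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}
```

The racing drones replace this lookup table with deep networks and a physics-heavy simulator, but the train-by-reward loop is the same shape.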
 
The question is: What's gained and what's lost with automated rapid decision-making? It is possible that AI could increase the lethality of weapons (though Gospel has yet to prove that). But it comes at the price of deliberation, restraint and opportunities for de-escalation.
 
As one expert put it rather succinctly, the future will be filled with "human combat teams augmented by really lethal weapons to fight these hideous kinds of medieval fights." So you know, happy Thursday.
 
I don't see any issues as long as all targets are examined and prepared for strike by human analysts. If you want to find a quote in a text, does it matter whether you find it manually or with a search function? Ultimately it's the same result; who cares how you got there.
 
There are plenty of concerns that the Gospel doesn't work as advertised. One expert points out that if the training data is inadequate, then the system "is really not far from indiscriminate targeting."
 