NEW: I took a look at "the Gospel," Israel's AI targeting system. Broadly speaking, I think there are two things I've learned: Nobody knows for sure whether it works as advertised. And militaries around the world are going to adopt similar tech soon.
So where did the Gospel come from? It was developed by Unit 8200, Israel's signals intelligence service (akin to the US NSA). One of the earliest mentions I could find was a 2020 IDF "Innovation Award" it won.
Among its earliest uses was the 2021 operation "Guardian of the Walls" against Hamas in the Gaza Strip. The Israeli military publicly mentioned its use alongside another AI system known as "the Alchemist."
In fact, the IDF was pretty public about using the Gospel in 2021. According to press reports, the system generated around 200 targets in that earlier conflict. Israel says it struck roughly 1,500 targets total during Guardian of the Walls, so by the IDF's own numbers, roughly 13% of the targets in that conflict (200 of 1,500) were chosen by AI.
There were reportedly some issues. One after-action report said the Israelis learned their AI systems were only being trained on targets that had been selected by intelligence analysts. The intel agencies didn't keep records of the targets they'd REJECTED...
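That's a textbook recipe for label bias. Here's a minimal sketch (entirely hypothetical data and features, nothing from the IDF's actual pipeline) of why it matters: a classifier that almost never sees rejected examples has no way to learn where the line is, and starts waving nearly everything through.

```python
# Hypothetical illustration of the training-data problem from the
# after-action report: if the model only ever sees targets analysts
# APPROVED, it has almost no examples of what a rejected target looks
# like, and learns to score nearly everything as a valid target.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic features per candidate; class 1 = genuinely valid target.
valid = rng.normal(loc=1.0, size=(500, 2))
invalid = rng.normal(loc=-1.0, size=(500, 2))

# Biased training set: only approved targets were logged, so rejected
# examples are nearly absent.
X_biased = np.vstack([valid, invalid[:10]])
y_biased = np.array([1] * 500 + [0] * 10)

# Counterfactual: the rejections were recorded too.
X_full = np.vstack([valid, invalid])
y_full = np.array([1] * 500 + [0] * 500)

biased_model = LogisticRegression().fit(X_biased, y_biased)
full_model = LogisticRegression().fit(X_full, y_full)

# Evaluate both on fresh INVALID candidates: how often does each
# wrongly nominate them as targets?
test_invalid = rng.normal(loc=-1.0, size=(1000, 2))
print("false-positive rate, biased training:",
      biased_model.predict(test_invalid).mean())
print("false-positive rate, full training:  ",
      full_model.predict(test_invalid).mean())
```

The exact numbers don't matter; the point is that the rejected examples are precisely what teach the model restraint.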
There are plenty of concerns that the Gospel doesn't work as advertised. One expert I spoke to points out that if the training data is inadequate, the system "is really not far from indiscriminate targeting."
And just because humans are in the loop doesn't mean it's safe. "Automation bias" can lead humans to go along with the decisions of computers, even when they're wrong. And the sheer speed of the system likely makes human judgment harder to exercise, points out Lucy Suchman.
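You can see the speed problem with a back-of-the-envelope model. This toy simulation (every parameter here is invented for illustration) assumes a reviewer catches bad nominations at a fixed rate per minute of scrutiny; crank up the tempo, and more bad targets slip through.

```python
# Toy model of "human review at machine speed" (all parameters invented):
# a reviewer spots errors at some rate per minute of scrutiny, so faster
# target generation -> less review time per target -> more misses.
import math

HOURS_PER_DAY = 8          # reviewer time available per day
BAD_TARGET_RATE = 0.05     # fraction of AI nominations that are bad
CATCH_RATE = 0.5           # per-minute chance of spotting a bad target

def miss_rate(targets_per_day: int) -> float:
    """Probability a bad target survives review at this tempo."""
    minutes_per_target = HOURS_PER_DAY * 60 / targets_per_day
    # Chance the reviewer NEVER catches the error in the time allotted.
    return math.exp(-CATCH_RATE * minutes_per_target)

for tempo in (10, 50, 100, 500):
    bad_through = BAD_TARGET_RATE * miss_rate(tempo)
    print(f"{tempo:4d} targets/day -> "
          f"{miss_rate(tempo):.0%} of bad targets slip past review "
          f"({bad_through:.2%} of all strikes)")
```

Under these made-up numbers, a reviewer handling 10 targets a day catches essentially everything; at 500 a day, the human in the loop starts to look like a rubber stamp.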
Finally, this entire realm of AI targeting is in a legal gray zone. The legal experts I spoke to unanimously agreed that commanders are still responsible for following the law of armed conflict (LOAC), but WHICH commanders are responsible if an AI makes a poor call?
Despite all these concerns, one thing EVERYONE seemed to agree on is that AI in the military is only going to grow. As former DIA chief Bob Ashley told me: "You're going to make decisions faster than your opponent, that's really what it's about."
I did a story earlier this year about how the US is using AI in its surveillance systems. It's not too far off from what the Israelis are doing, though I don't know that the Americans have closed the loop on targeting in the same way. (Audio only, use yer ears.)
And I'm increasingly convinced this is a step on the road to fully autonomous battlefield systems. One researcher laid out a quite convincing case in a paper on AI in urban warfare: Lethal AI will become just another tool in the arsenal...
The big limiting factor at the moment is probably robotics, but AI is making quite rapid strides in that area too. Deep reinforcement learning was used to train a drone earlier this year, and it outperformed human champions on a closed course.
The question is: What's gained and what's lost with automated rapid decision-making? It's possible that AI could increase the lethality of weapons (though the Gospel has yet to prove that). But it comes at the price of deliberation, restraint and opportunities for de-escalation.
As one of my sources put it rather succinctly, the future will be filled with "Human combat teams augmented by really lethal weapons to fight these hideous kinds of medieval fights." So you know, happy Thursday.