Tomorrow will be a very busy day for me, so I’ll get a head start.
The big debate about the Hamas attack on Israel has been whether Israel (and the US) were truly taken by surprise or whether some nefarious scheme by the Deep States of one or both countries allowed it to happen—in order to justify wider war. Scott Ritter today makes what to me is a clinching case that the Hamas attack succeeded because of a true intel failure. However, before we get to that I’m going to embed a 23-minute YouTube of Judge Nap interviewing Ray McGovern (who gives a big h/t to Ritter). McGovern provides a short version of what went wrong in a fairly lucid way:
Now, here’s the link to Ritter’s article:
SCOTT RITTER: Israel’s Massive Intelligence Failure
October 8, 2023
The origins of Israel’s intelligence failure on the Hamas attacks can be traced to the decision to rely on AI instead of the contrarian analysis born of the earlier intelligence failure of the 1973 Yom Kippur War.
For most of the article Ritter provides a sort of historical overview of the institutional culture at CIA and Israeli intelligence, as well as a history of relations between the two. Ritter was well placed to observe all this. The short story goes something like this.
After the triumph of Israel’s surprise attack on Egypt in the 1967 Six Day War, Israel became complacent and adopted what Ritter terms a rigid “a priori” approach. What he means is that Israeli intel produced a construct of how the Arabs were supposed to act, and interpreted data as it came in according to that construct. That led to the disaster of the 1973 Yom Kippur war—the construct told them that, no matter what the data was saying, the Arabs weren’t supposed to attack. But they did. In the wake of that disaster Israeli intel adopted a “contrarian” approach to evaluating intel. They instituted what they called a “Doubting Thomas” system—what Catholics know as the “Devil’s Advocate”. All intel reporting was subjected to intense “contrarian” analysis by the Doubting Thomas, a sort of outsider challenging the interpretation of the data.
That worked well, but a new director went back to the old a priori approach. That was a major contributing factor to the current disaster. But there was more. The reliance upon AI to process data was combined with a foolish, hubristic leak:
Unit 8200 likewise has spent billions of dollars creating intelligence collection capabilities which vacuum up every piece of digital data coming out of Gaza — cell phone calls, e-mails, and SMS texting. Gaza is the most photographed place on the planet, and between satellite imagery, drones, and CCTV, every square meter of Gaza is estimated to be imaged every 10 minutes.
This amount of data is overwhelming for standard analysis techniques relying on the human mind. To compensate for this, Israel developed a huge artificial intelligence (AI) capability which it then weaponized against Hamas in the short but deadly 11-day conflict with Hamas in 2021, named Guardian of the Walls.
Unit 8200 developed several unique algorithms which used immense databases derived from years of raw intelligence data collected from every possible source of information.
Building upon concepts of machine learning and algorithm-driven warfare that have been at the forefront of Israeli military research and development for decades, Israeli intelligence was able to use AI to not only select targets, but also to anticipate Hamas actions.
This ability to predict the future, so to speak, helped shape Israeli assessments about Hamas’s intent in the lead-up to the 2023 Yom Kippur attacks.
Israel’s fatal mistake was to openly brag about the role AI played in Operation Guardian of the Walls. Hamas was apparently able to take control of the flow of information being collected by Israel.
There has been much speculation about Hamas “going dark” regarding cell phone and computer usage to deny Israel the data that is contained in those means of communication. But “going dark” would have, by itself, been an intelligence indicator, one that AI would have certainly picked up.
Instead, it’s highly probable that Hamas maintained an elaborate communications deception plan, maintaining a level of communications sufficient in quantity and quality to avoid being singled out by AI — and by Israeli analysts deviating from the norm.
In the same way, Hamas would likely have maintained its physical profile of movement and activity to keep the Israeli AI algorithms satisfied that nothing strange was afoot.
This also meant that any activity, such as training related to paragliding or amphibious operations, that might have been detected and flagged by Israeli AI was conducted in ways designed to avoid detection.
The Israelis had become prisoners of their own successes in intelligence collection.
By producing more data than standard human-based analytical methodologies could handle, the Israelis turned to AI for assistance and, because of the success of AI during the 2021 operations against Gaza, developed an over-reliance upon the computer-based algorithms for operational and analytical purposes.
Turning from the Contrarian
The origins of Israel’s massive intelligence failure regarding the 2023 Hamas Yom Kippur attacks can be traced to the decision by Amos Gilad to divorce Israeli intelligence from the legacy of contrarian analysis born of the intelligence failure of the 1973 Yom Kippur War, a decision that reproduced the very over-reliance on inductive reasoning and intuition that led to the original failure.
AI is only as good as the data and the algorithms used to produce its reports. If the human component of AI, those who program the algorithms, is corrupted by flawed analytical methodologies, then so, too, is the AI product, which replicates those methodologies on a larger scale.
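Ritter’s point about “going dark” is worth making concrete, because it describes volume-based anomaly detection in general. Here is a minimal sketch of the idea in Python; the numbers and the detection rule are hypothetical illustrations of mine, not anything from Unit 8200’s actual system. A detector that compares today’s traffic against a historical baseline flags sudden silence just as readily as a surge:

```python
# A minimal sketch of volume-based anomaly detection. All figures are
# hypothetical; this illustrates the general technique, not Israel's system.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's traffic if it deviates from the historical baseline
    by more than `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) > threshold * sigma

# Hypothetical daily message counts from a monitored area.
baseline = [980, 1010, 1005, 995, 1020, 990, 1000]

print(is_anomalous(baseline, 0))     # True:  "going dark" trips the alarm
print(is_anomalous(baseline, 2500))  # True:  a surge trips it too
print(is_anomalous(baseline, 1003))  # False: baseline-mimicking traffic passes
```

The only traffic such a detector leaves unremarked is traffic that looks like every other day, which is exactly the deception profile Ritter says Hamas maintained.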
I find Ritter’s analysis highly persuasive, especially given his intimate knowledge of the inner workings of the intel world.
There has been a documented, ongoing hack of Israel’s defenses since last March: degradation, by network infiltrators, of the entire Israeli defense and intelligence system. Patrick Byrne covered it pretty thoroughly, and it’s beyond disgusting that a country’s people were subjected to terrorists on behalf of a fetishist group’s trans agenda.
https://threadreaderapp.com/thread/1711440905943572918.html
Is it an 'intel failure' if the Intelligence Apparatus is warned of a likely attack in advance and the warning is 'ignored'? Or is it a different kind of 'failure'?
https://www.moonofalabama.org/2023/10/egypt-claims-it-warned-israel-of-upcoming-attack.html#comments