IOF Uses AI to Monitor Palestinian Civilian Communications

By Staff, Agencies
An investigation by The Guardian, +972 Magazine, and Local Call has revealed that the “Israeli” elite military intelligence unit, Unit 8200, has developed an advanced artificial intelligence [AI] system modeled after ChatGPT, designed to enhance the “Israeli” occupation's surveillance capabilities.
The tool is trained on a vast collection of intercepted Palestinian phone calls and messages and will be used in the occupied Palestinian territories to make “Israel's” monitoring of civilians even more efficient and, perhaps, more dystopian.
The AI model was designed to process and analyze spoken Arabic, drawing from extensive amounts of data gathered through the existing sweeping surveillance network.
Intelligence officers reportedly use the system like a chatbot, asking questions about individuals under watch and receiving AI-generated insights based on intercepted communications.
Chaked Roger Joseph Sayedoff, a former "Israeli" military intelligence technologist, publicly acknowledged the project during a 2023 AI conference, stating that it required “psychotic amounts” of Arabic-language data, as reported by the aforementioned outlets.
Former intelligence officials have also confirmed the initiative, explaining that earlier machine learning models paved the way for this more sophisticated AI system.
A source familiar with the project emphasized the far-reaching implications of the technology, stating: “AI amplifies power. It’s not just about stopping attacks—I can track human rights activists, monitor Palestinian construction, and know what every person in the West Bank is doing.”
The scale of data collection suggests Unit 8200 has amassed a vast archive of Palestinian communications, a practice that intelligence experts say amounts to blanket surveillance. "Israeli" and Western intelligence sources told investigators that the AI-driven monitoring system allows authorities to collect and process information on an unprecedented scale.
However, human rights advocates warn that relying on AI for intelligence work carries serious risks, as AI models, including large language models [LLMs], are prone to errors, biases, and misinterpretations.
Zach Campbell, a senior researcher at Human Rights Watch, expressed concern over how such a tool might be used against Palestinians: “It’s a guessing machine, and ultimately, these guesses can be used to incriminate people.”
When questioned about the AI project, an IOF spokesperson said "Israeli" intelligence “employs various methods to identify and counter terrorist activity in the Middle East.”
The idea for a ChatGPT-style AI system reportedly emerged after OpenAI launched its chatbot in 2022.
The project, however, gained momentum after Operation Al-Aqsa Flood on October 7, 2023, when reservists with AI expertise, including employees from Google, Meta, and Microsoft, were called back to military service to support the initiative.
Because existing AI language models were insufficient for intelligence work, Unit 8200 built its own system using vast amounts of intercepted Palestinian and Lebanese communications.
Sources say the goal was to centralize every Arabic conversation the unit had ever collected. Sayedoff, the former intelligence official, reportedly stated that the AI model was designed to focus on “dialects that hate us.”
The new AI model is believed to have enhanced "Israeli" intelligence operations, particularly in the West Bank, where it has facilitated mass surveillance and increased arrests. Intelligence sources explained that the AI helps flag individuals expressing anger toward the occupation or discussing possible attacks against "Israeli" forces and settlers.
Palestinian digital rights groups have condemned the use of AI-driven surveillance. Nadim Nashif, director of 7amleh, criticized the "Israeli" occupation for turning occupied Palestinians into “subjects in a military AI laboratory,” arguing that such technology enables deeper oppression and control, reinforcing the apartheid-like conditions Palestinians face.
More broadly, experts warn that integrating AI into military intelligence without proper safeguards could have dangerous consequences: AI models frequently make errors, misinterpret intent, and generate misleading conclusions, which could lead to unjustified targeting and wrongful arrests.
Brianna Rosen, a former White House national security official, cautioned that while AI could help detect threats, it also creates a risk of false accusations and flawed decision-making. “Mistakes are inevitable, and some of these mistakes could be life-threatening,” she said.
Concerns about AI’s role in military operations were underscored by an "Israeli" airstrike in Gaza in November 2023, which reportedly killed four civilians, including three teenage girls.
An investigation by the Associated Press found that faulty AI-generated analysis may have influenced the intelligence officers’ decision to authorize the strike.
When asked how the IOF ensures accuracy and fairness in AI-based intelligence, military officials refused to disclose details. However, an IOF spokesperson insisted that “every use of technology follows a meticulous process and involves human oversight.”
The "Israeli" occupation's aggressive deployment of AI-driven intelligence appears to push boundaries further than its Western allies, experts say.
Moreover, while intelligence agencies worldwide are adopting AI tools, including an AI-powered open-source monitoring system at the CIA, the "Israeli" entity's approach differs in its broad application of AI for mass surveillance and military targeting.
Human rights groups warn that using highly personal, intercepted communications to train AI is not only invasive but also a violation of privacy and international law.