
Gaza and AI warfare

By Asad Baig | 2025-04-26
IN the early hours of a night in Gaza last October, an entire family was erased by what the Israeli military referred to as a 'precision strike'.

There was no warning, nor any of the other elements of conventional warfare. Just an algorithm, part of an AI-driven system called Lavender, scanning metadata, flagging the location as a threat, and triggering death from the sky. Investigations have since revealed that Lavender flagged over 37,000 Palestinians for potential targeting, often with little to no human verification; a substantial number of them were non-combatants. But that did not stop the system from working in a cold, brutal and 'efficient' manner.

Welcome to the future of warfare currently being beta-tested in Gaza, where battle decisions are processed by code and cold logic, no longer weighed by conscience or law.

Israel has long touted its military as among the most advanced in the world, and with artificial intelligence (AI) now integrated into its architecture of occupation, it has taken a lead that should terrify us all. From AI-powered surveillance towers and facial recognition checkpoints in the West Bank, to autonomous drones and predictive kill lists in Gaza, the Israeli military has embraced AI more as an executioner than a support tool.

The Lavender system, according to testimonies from Israeli intelligence officers, would identify suspected Hamas operatives using pattern recognition, facial data and digital footprints. Once flagged, the individual's home could be bombed within minutes, often when children and relatives were most likely to be present. In many cases, there was no requirement to validate whether the flagged person was actively engaged in combat or posed any imminent threat. The decision to strike was rubber-stamped by junior officers relying on the system's 'high success rate' rather than actual intelligence.

Israel's AI militarisation is deeply intertwined with the global tech industry. The irony is that big tech companies that boast about ethical AI principles are, in parallel, fuelling some of the most sophisticated systems of digital apartheid and mechanised killing on the planet. Amazon Web Services and Google, for instance, are jointly involved in Project Nimbus, a $1.2 billion cloud computing contract with the Israeli government, which provides the backbone for data processing and AI operations. These companies claim the project excludes military applications, yet whistle-blowers and experts have raised serious doubts, pointing to the blurred lines between civilian and military data in a heavily securitised state.

Gaza, by design, has become the perfect test lab: a densely populated, besieged strip of land where two million Palestinians, including children, live under permanent surveillance and recurring bombardment, their movements, conversations and clicks harvested by a regime that has turned them into data points. Israel's experimentation with AI-based warfare is far more structural than incidental. The occupied, blockaded territory offers an unregulated environment in which to deploy technologies that would raise hell if tested on civilians anywhere else in the world.

The results are devastating. AI-driven targeting has contributed to one of the highest civilian death tolls in modern conflict. Entire residential buildings are flattened on the basis of 'data patterns'. The Israeli military frames these as surgical strikes, but the body count and eyewitness accounts tell another story. The cold, clinical language of AI allows the Israeli military to mask the violence as objective. We are told that algorithms aren't political, but they are trained on data shaped by occupation, bias, and years of dehumanisation. In Gaza, this prejudice appears to have been actively automated.

One can safely assume that accountability has collapsed when a machine recommends a target and a human merely clicks 'confirm'. Where does the responsibility lie? The developers who built it, the officer who trusts it, the tech company that profited from it, or the government that funds it? No one is held to account in the current legal vacuum.

International law is just as unprepared, perhaps even rendered obsolete. The Geneva Conventions and other frameworks governing the laws of war were built around human decisions, proportionality, and accountability. But what happens when war is waged by machines making probabilistic guesses? There is no binding international treaty on Lethal Autonomous Weapons Systems, ironically abbreviated as 'LAWS'. However, the UN secretary general has called for the conclusion of a legally binding instrument by 2026, aiming to prohibit LAWS that operate without meaningful human oversight.

Meanwhile, Israel enjoys near-total impunity.

While the US provides billions in military aid, Big Tech players, namely Amazon, Google and Microsoft, quietly support the development and deployment of AI tools used in the occupation. These companies face virtually no consequences for their role. While the US has introduced some export controls on advanced computing chips and certain AI models, these measures fall short of regulating the kinds of dual-use technologies currently enabling military operations in Gaza. There are no effective penalties for companies that supply infrastructure later used in targeted surveillance or algorithmic warfare. Regulatory bodies continue to promote ethical AI frameworks in theory, but remain silent when it comes to enforcement, especially when violations are cloaked in national security or foreign policy interests.

Unsurprisingly, this silence is profitable for all parties involved. As Israeli firms market their AI tools as 'battle-tested', they gain traction with other governments eager to adopt similar tactics.

As a result, we are witnessing what could be the globalisation of algorithmic warfare, with Palestinians as the first subjects.

And so the question is no longer just moral.

What does it mean for the future of humanity when machines are trained to kill with impunity and algorithmic 'prediction' becomes a justification for annihilation? The militarisation of AI does not just threaten Palestinians; it threatens us all. Once machines are allowed to decide who lives or dies, the rest of the world is only ever a few datasets away from becoming the next battlefield.

The systems being perfected over Gaza today could soon be deployed in migrant camps, urban protests, or across other war zones. And if the world continues to watch in silence, the militarisation of AI is bound to be industrialised.

The writer is the founder of Media Matters for Democracy.