The essays in this series were written during the summer of 2024 and may not fully address the rapidly escalating violence in the region.

Israel’s war in Gaza is driven by the algorithmic systems of the day. Big data and machine learning systems—glossed as Artificial Intelligence (AI)—comb through troves of information extracted from drones, cameras, checkpoints, mobile phones, and digital platforms covering the Occupied Palestinian Territories. One, called Lavender, generates endless recommendations of people who may be affiliated with a militant group. Another, called Where’s Daddy, determines where the army should kill them. A host of other semi-automated applications help decide when intelligence operatives should sign off on drone strikes in refugee camps, which universities pilots should decimate with 2,000-pound bombs, or what critical infrastructure ground troops should light up with explosives. Over ten months of unending siege by land and air, AI has helped Israeli troops raze Palestinian life to the ground.

The carnage in Gaza has brought Israel before the International Court of Justice on charges of genocide, while the prosecutor of the International Criminal Court has applied for arrest warrants against top Israeli officials for war crimes. Yet those calling the shots point to the more than forty thousand Palestinians killed, hundreds of thousands injured, and millions displaced as proof that the army has built its AI-powered killing capacities to scale. “This is unprecedented in the history of modern armies, that we do not lack something to attack,” the head of Israel’s targeting unit, identified as Col A, boasted in April of the number of algorithmically generated targets hunted down and killed with AI-assisted weapons systems across the Strip. “Bring thousands more pounds of ammunition, we have blasted the bottleneck wide open,” he said.

I spent years tracking the development and impact of the increasingly automated systems ravaging Palestine today, speaking with Israeli intelligence veterans and reservists as well as Palestinians living under a technologized military siege.[1] I found that alongside the politicians bent on annexation, the messianic generals eager to see war continue, and the soldiers-turned-founders who say more data and better algorithms are the only ways to ensure Israeli security, broad swaths of the global technology industry drive regional war and occupation. White-collar technicians, university researchers, waged laborers, and pedestrian users of platforms power a vast algorithmic circuit driving violence in Gaza and beyond. As the warfare drags on with the support of many Western governments and corporate heads, ethnographic attention to these supply chains reveals how many more are implicated in the violence.

While dominant framings of automated conflict omit its human foundations, anthropology tells us that the algorithms driving the violence in Gaza are sociotechnical systems, powered by very ordinary circuits of human labor and information.[2] Family photos uploaded to Google Images help Silicon Valley developers train the facial recognition algorithms that Israeli troops use to determine who should be disappeared inside military prisons for months on end. Microsoft engineers build the speech-to-text software that the army uses to monitor thousands across Gaza, identify targets, and justify indiscriminate strikes. Amazon workers maintain the cloud computing infrastructure that hosts a dizzying amount of data collected from phone taps, CCTV cameras, license plate scanners, body cams, and drones covering the Occupied Palestinian Territories. Palantir engineers tinker with the recommendation systems that lend a military campaign dictated largely by vengeance a veneer of technical efficiency. “In practice there are no unsupervised algorithms,” Nick Seaver writes. “If you cannot see a human in the loop, you just need to look for a bigger loop.”

Tracing these loops turns our attention to the ordinary, and often unwitting, technology workers driving violence unfolding thousands of miles away. Doing so is not to deny that power-hungry politicians, belligerent military heads, and foreign lawmakers signing over arms and military support play a pivotal role in regional war. My point, however, is that their policies and practices gain force through flows of technology, labor, and expertise that stretch from fortified bases outside of Tel Aviv to open-plan offices in London and San Francisco. Something as pedestrian as riding a shuttle bus to a technology campus, tapping into an office, and typing on a laptop implicates many in these war crimes.

Many of the white-collar professionals bound up in this supply chain are insulated from the consequences of their actions thanks to an information-driven economy that abstracts away its own conditions of production.[3] The brutal entailments of data collection and refinement are papered over with mythologies of technical autonomy and claims of inevitability. Such myths are, as Shoshana Zuboff writes, “designed to render us helpless and passive… resolutely protecting power from challenge.”

Some technology workers at Google, Microsoft, and Amazon are peeling back this obfuscation by protesting contracts between their employers and militaries, including those of Israel and the United States. They refuse to maintain the cloud infrastructure or develop the applications undergirding AI-powered warfare, disrupting board meetings or simply walking out on business as usual. As the Israeli politicians and generals who determine when and where the bombs fall show no sign of wanting a ceasefire, American and European lawmakers fail even to condition military aid and arms sales, and corporate CEOs sign over lethal systems to bolster a lucrative AI arms race, such acts of dissent offer an ethnographic reminder: many more beyond the region are implicated in this violence, and many more can demand a different status quo.

Footnotes

[1] My dissertation, Algorithmic Dispossession: Automating Warfare in Israel and Palestine, is an ethnographic portrait of how artificial intelligence (AI)—particularly AI-powered surveillance and weapons systems—is transforming what it means to wage and live with war across Israel and Palestine. I drew on over three years of multi-sited ethnographic fieldwork to read the entrenchment of regional violence through the proliferation of big data analytics and machine learning technologies.

[2] AI boosters and detractors like to endow AI with an autonomy of its own, framing the advent of automated warfare as inevitable. The head of data science and machine learning in the Israeli intelligence corps celebrates AI as “data science magic powder” that gives the army an edge over its adversaries. Pundits decry such systems as “dystopian killing machines.”

[3] Undoing such abstraction has long been a core tenet of the anthropology of science and technology, one modeled in foundational ethnographies of computing and more recent work.