For Document’s Fall/Winter 2024–25 issue, Sophia Goodfriend examines the dark side of big data in Palestine
Yazan Khalili’s installation Medusa: Don’t Be a Stranger consists of six cracked screens suspended by wire in mid-air. They hang behind one another, a row about four feet from the ground. Biometric scans, biographical information, video interfaces, and social media profiles flicker across the marred glass surfaces. Some monitors have iPhones and tablets plastered on top, further complicating the images. To see the amalgamation of text and images projected across the six-channel video, visitors must walk between the monitors and stare into the camera lenses hovering over the screens, as if offering up their own data to the artwork’s portrayal of machine-learning systems.
Khalili is a celebrated architect and visual artist who has worked in Palestinian cultural institutions for years. He says Medusa is meant to depict digital media as a “political structure,” something suturing material to ideology through endemically human industrial formations.
When Khalili first exhibited Medusa at the KW Institute for Contemporary Art in Berlin in 2020, critics said the work offered a counterpoint to the digital age’s fantasies of frictionless connectivity, laying bare the amalgamation of human labor undergirding AI-powered systems like facial recognition technology. Yet after a year of war in Israel and Palestine, I see something else in the installation: one of the broken screens looks like it was shot with a bullet; a tablet affixed atop another is reminiscent of the devices carried by Israeli border police in Palestinian cities to scan civilians’ faces. Khalili conjures a bloody status quo lingering just below the user-friendly interfaces powering much of our digital lives. War, occupation, mass surveillance: these are the information age’s conditions of possibility. It is a foundation soaked in blood.
—
I met Isaac, an intelligence veteran, in a West Jerusalem café on a quiet Saturday morning in late May. We sipped iced coffee under an awning shading us from the heat wave. It was seven months into Israel’s war on Gaza. Upward of 30,000 Palestinians had been killed and millions more displaced in a protracted and bloody military offensive that had failed to achieve the military’s stated goals of decimating Hamas and bringing the remaining hostages home. Next to us, a table of reservists back from Gaza for the weekend rolled tobacco and knocked back pint glasses of draft beer. M-16 rifles were nestled between their knees or propped up against the graffitied table legs. An unremarkable scene. Isaac is also a reservist in the Israeli military—one of the over 350,000 mobilized after Hamas militants massacred more than 1,000 people in a historic security failure. Like many veterans I have interviewed for my academic work and reporting, Isaac spent the first few months of the war sitting in an intelligence base, encouraged to use algorithmically generated targeting lists to help coordinate where and when bombs fell. A program called Lavender displayed lists of civilians who—because of the contacts in their phones, the content of their WhatsApp inbox, or their social media activity—had been greenlighted for assassination. Another, called Where’s Daddy, displayed alerts when those targets entered their family homes, helping to determine when and where the Air Force should strike. Over the next few months, “dumb bombs” dropped from the sky and explosives detonated by troops on the ground replaced universities, mosques, and apartment complexes with 500-foot-wide craters. The fabric of Palestinian life was scorched to the ground.
“How did we get to a place where seeking a target to kill remotely feels little different than scrolling through profiles of high school friends on Facebook?”
Isaac, who spoke on the condition of anonymity, said the AI-powered targeting systems felt like any other search engine: type a name into a search bar and scroll through mountains of data seamlessly integrated into a user-friendly interface. The formal similarities are hardly a coincidence. Instead, they lay bare long-standing collaborations between civilian technology conglomerates and Israel’s military. Google provides some of the facial recognition algorithms powering classified surveillance databases that soldiers toggle between. Microsoft supplies speech-to-text software that expedites the work of surveilling and killing. The army uses Amazon cloud services to store troves of data used in lethal operations. These collaborations mean classified surveillance and targeting databases are even nicknamed after tech giants: Google Gaza or Facebook for Palestinians. “Like looking up a friend on social media,” Isaac admits, “they are familiar.”
How did we get to a place where seeking a target to kill remotely feels little different than scrolling through profiles of high school friends on Facebook? The AI systems deployed in Gaza are the apotheosis of a process set in motion in the mid-20th century, when early cyberneticians built up surveillance databases and rudimentary targeting systems for the United States Department of Defense (DoD). They hit the battlefield in the second half of the 20th century, when US troops scorched Vietnam to the ground, and were refined over four more decades of counterinsurgency warfare abroad and swelling surveillance at home. The Cold War made networked surveillance and killing a big business, largely bankrolled by the DoD. Slowly, innovations seeped into civilian markets—powering a revolution in personal computing, e-commerce, and dot-com booms and busts, all predicated upon the expropriation of users’ information for corporate gain. In turn, civilian technology firms staked out new monopolies over mass surveillance and data analysis, which they sold back to governments. Underneath the user-friendly interfaces engineered by Google and Facebook employees was an enduring politics of death.
—
In the early 2000s, the CEO of PayPal, Peter Thiel, refashioned himself as an apostle of the military-industrial complex 2.0. His gospel was simple: re-engineer the algorithms powering platform capitalism for warfare. Thiel, along with others including businessman Alex Karp, who serves as its CEO, founded Palantir, a start-up that ran troves of personal data through the same algorithms pinpointing credit fraudsters to hunt down terrorists on Middle Eastern battlefields.
Palantir promised to do what no technology firm had done before: leverage the civilian technology sector’s new monopoly over data analysis, pattern detection, and machine learning to revolutionize warfare, making military operations bloodless and precise. The product came at the cost of the privacy protections liberal democracies are supposed to enshrine. But Palantir’s early investors—namely the CIA—didn’t care; the power afforded by expansive surveillance databases was thrilling. Security states scrambled to drop cash onto an increasingly automated arms industry. For Thiel, Palantir was a realization of the “in-between space,” a vision of collaboration between militaries and Silicon Valley he had been boosting since 9/11. As the United States’ “war on terror” went global, Thiel promised Silicon Valley firms could develop and sell lethal systems back to governments and militaries struggling to keep up with the technology sector’s breakneck pace of innovation. The alliance was a return to Silicon Valley’s origins in a Cold War military-industrial complex, and Thiel said it would give the US and its allies an advantage over adversaries, so long as governments cultivated a welcoming climate for such operations. Features of conviviality included minimal regulations on data extraction, categorically denying civilians privacy protections, and relaxed oversight of AI development. Overpoliced cities in the United States, border zones in Europe, securitized regions of Northwest China, and the occupied Palestinian territories—spaces of exception, where civil liberties are non-existent—would be particularly hospitable.
Long a hub for military and security industries, by the late 2000s Israel would make the “in-between space” a national brand. Billions pumped into expanding military technology trained the next generation of start-up founders well-versed in military demands. Many secured lucrative contracts with an army eager to prototype and refine surveillance systems and weaponry across the occupied Palestinian territories. Politicians and military heads celebrated a revolving door between Israel’s booming start-up ecosystem and the army as the key to military prowess. Scandals surrounding boutique Israeli surveillance and weapons tech firms peddling their wares to foreign dictators, or eroding the rights of Palestinians, only boosted the country’s aspirational image as the World’s Ultimate Security State.
Palantir scored contracts with Israel’s government and military early on in the nation’s campaign to advertise itself as an ideal environment for beta-testing digital militarization. After Palantir opened a Tel Aviv office in 2015, Thiel and Karp met with Israeli military leaders regularly. They were keen on building up targeting databases capable of converting an unending stream of data extracted from the occupied Palestinian territories—emails, call logs, cell phone address books, WhatsApp messages, social media profiles, location stamps—into predictive targeting systems. At defense and security expositions, military leaders boasted that machine learning algorithms could pinpoint patterns that allegedly determined an individual’s likelihood to be associated with a militant group or carry out a violent act. Operatives sequestered in air-conditioned military bases combed through lists of targets to determine who would live and who would die.
The Israeli army couldn’t do it alone. Ben, a veteran who served in an Israeli intelligence unit devoted to big data and machine learning in 2014, told me his military base hosted many private contractors. When we spoke in June, he said some of these technologists worked for international firms while others were paid by domestic boutique surveillance start-ups founded by veterans of elite Israeli intelligence units. From 9 am to 5 pm, the contractors waltzed around in jeans and t-shirts, building up predictive targeting systems and surveillance interfaces between lunch breaks and trips to the gym. “You could be sitting there in your uniform, and next to you is a civilian making six times your salary, commuting from Tel Aviv.” Ben said the “civilian tech vibe” made it easy to view the military as a networking opportunity for those eager to land a job in the country’s burgeoning technology sector. Sometimes his team would tour the Tel Aviv offices of the tech firms supplying services.
A decade-plus of contracting operations to private technology firms means that Israel relies on civilian suppliers to wage its wars, a fact army heads marshal as proof of military prowess. On July 17, 2024, Col. Racheli Dembinsky, commander of the IDF Center of Computing and Information Systems (Mamram), stood in a fluorescently lit conference room in Rishon LeZion, a suburb on the outskirts of Tel Aviv’s sprawl. She was one of the first speakers at a conference titled IT for IDF, and her speech was leaked to the public by +972 Magazine in early August. As reservist soldiers and civilian technology workers sipped espresso beneath blasting AC vents, Dembinsky praised corporate tech conglomerates for offering cloud computing services and AI systems that have facilitated the IDF’s unprecedented bombardment of Gaza since mid-October of last year. “The crazy wealth of services, big data and AI—we’ve already reached a point where our systems really need it,” Dembinsky said. “Working with these companies has granted the military very significant operational effectiveness.”
—
The contracts Dembinsky was referring to have been touted by corporations eager to claim their wares as battle tested. Since October 7, Palantir’s Alex Karp has supplied a host of surveillance and targeting systems that guide the not-so-precise aerial strikes reducing swaths of Gaza to rubble. Meanwhile, Google, Amazon, and Microsoft have offered even more cloud computing infrastructure and AI applications to intensify Israeli military operations in Gaza. Many of these systems will go on to circulate across global markets. Over the years, Israel’s staging of its occupation of Palestine as collaboration with foreign and domestic tech companies has fueled a whirlwind of civilian innovations, from biometric cameras refined on Palestinians, repurposed as thermal cameras to monitor infectious disease, to drones cratering apartment complexes in Gaza City redeployed to deliver packages in Los Angeles. If “data is the new oil,” as Peter Thiel has said many times, there is no better time to extract it than during war.
“Scandals surrounding boutique Israeli surveillance and weapons tech firms peddling their wares to foreign dictators, or eroding the rights of Palestinians, only boosted the country’s aspirational image as the World’s Ultimate Security State.”
The industrial scale of automated warfare today implicates many in the violence unfolding in Gaza: not only Israeli soldiers and civilian technology workers but also everyday users scattered across the world. Some of us sit in Silicon Valley technology complexes, engineering the cloud servers or databases informing lethal operations. More of us offer up the data and supply the free labor that trains and refines the algorithms driving bombing campaigns abroad each time we go online, even if we caption our selfies with the words “Free Palestine.” Selfies and search engine queries feed the surveillance databases and predictive models undergirding lethal weapons systems. Broad swaths of the world’s population are, in some way or another, what the media scholar Tung-Hui Hu has called “freelancers for the state’s security apparatus.”
The immersive nature of Khalili’s Medusa: Don’t Be a Stranger drives this entanglement home. You walk between screens depicting facial scans and social media profiles. Something addictive in the design draws you closer to the interface. The compulsion to touch and interact with the work evokes, in Khalili’s words, “the role our face plays in the construction of these platforms.”
It is a sophisticated rendering of techno-optimism’s cruelty: the way the technologies offering connectivity and solidarity can also maim and kill. And it is a point that resonates today more than ever, as users offer up an unending data-stream for the corporate tech conglomerates and private surveillance firms driving Israel’s protracted bombardment of Gaza. A contemporary Medusa, today’s data-driven economy turns entire cities into piles of stone.
When we spoke over Zoom in July, Khalili told me his relationship to digital platforms was pessimistic at best. “I don’t think efforts to reform these technologies will prevent them from feeding systems of power,” he said. Instead, he has devoted his time to building up tools and spaces beyond the reach of the militarized technology sector. Radio Alhara, a communal media project Khalili helped launch in Palestine in 2020, was one such project. The radio station and online platform offers connectivity while, in Khalili’s words, refusing to “individualize us for larger political structures.” It is a model of defiance many of us bound up in warfare’s supply chain might also follow.