The artist on CIA-funded facial recognition technology, images in the post-truth era, and why AI is its own form of politics
“Can you hear me?” Trevor Paglen asks me over a light crackling on the line. “I’m in rural California for a project, so I’m calling on some weird 4G modem or something.”
That Paglen is calling from the road doesn’t surprise me; he has made a career out of examining the unknown, from NSA-tapped fiber optic cables on the ocean floor to CIA black sites to the hidden world of images humans weren’t meant to see. Fusing art, science, and investigative journalism, his work casts a roving eye on the invisible infrastructure of an increasingly networked world and the political histories that produced it. By turning his lens on the 21st-century surveillance state, Paglen documents the unsettling ubiquity of these technologies and makes the reality of our ever-expanding technosphere more legible.
In a sense, Paglen is as much a cartographer of power relations as of technology: by deconstructing the internal mechanisms of AI systems, he lays bare their politics and reveals what’s at stake. “The overwhelming majority of images are now made by machines for other machines, with humans rarely in the loop. If we want to understand the invisible world of machine-machine visual culture, we need to unlearn how to see like humans,” writes Paglen in a 2016 essay, suggesting that the machine-readable image has covertly transformed our visual culture into one independent of the human seeing-subject. Paglen argues that the automation of vision on a massive scale enables images to operate as powerful regulatory tools for specific race and class interests, while concealing their true nature with claims of ‘objectivity.’
As Black Lives Matter protests continue across the United States, issues of surveillance, policing, and state power have been thrown into sharp relief, with facial recognition technology becoming a lightning-rod issue among privacy advocates. Last week, Portland, Oregon, passed landmark ordinances banning the use of facial recognition technology by the city and by private entities—the first legislation of its kind in the United States. Trevor Paglen joins Document to discuss images in the post-truth era, police abolition, and the future of privacy in America.
Camille Sojit Pejcha: In your new exhibitions Opposing Geometries and Bloom, you unpack how modern AI systems are trained to interpret photographs of people, drawing on the history of phrenology and race science to show how categorizing and labeling images is its own form of politics.
Today, we’re facing more uncertainty than ever about the legitimacy of what we see; reality has become divergent and fractal as fake news proliferates algorithmically and technology advances to the point that even video footage can be fabricated. How is the anxiety of the post-truth moment shaping our response to images, and is this an opportunity to rethink that relationship?
Trevor Paglen: That’s a fantastic question that is at the absolute core of the work, so you’re going to get a dissertation-length answer.
As I see it, society is undertaking the enormous project of reimagining the relationship between image and meaning. We are in the midst of a massive civil rights movement, which is contesting the way appearance has been used to perpetuate all sorts of violence and renegotiating the role of the image. Coronavirus has also changed the meaning of the image by changing our relationship to the environment around us, and our sociability has been forced almost entirely onto online platforms, which are designed to harvest as much information as possible from our images using artificial intelligence.
Now that we have sophisticated machine learning algorithms that can classify people according to appearance, we’re incentivized to believe there is some kind of stable relationship between the image of something and its essence—but I want to disrupt this assumption, because there are usually bad politics attached.
I did a project called ImageNet Roulette, which looks at the most widely used training set in artificial intelligence research—a giant database of images and labels, meant to be used to teach AI systems how to recognize an apple or an orange or a pear or a strawberry or a tractor. It’s just a bunch of categories of images, but when it comes to labels for people, there are not just things like scuba diver, cheerleader, priest—the “descriptions” meant to categorize images very quickly turn into judgments, and you see things like “bad person,” “slattern,” “slut,” all kinds of racial slurs. The very existence of such a category is misogynistic or ableist or racist, yet what they’ve done is actually put pictures of people into those categories, the presumption being that you could train an AI system to recognize whether somebody is a bad person or not just by looking at their picture.
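At its core, a training set like the one Paglen describes is just a mapping from human-chosen labels to piles of images; whatever judgment a label carries becomes the “ground truth” a model learns. The sketch below is a conceptual illustration of that structure—its categories and filenames are hypothetical, not ImageNet’s actual contents or format.

```python
# Conceptual sketch: a labeled training set is a mapping from
# human-chosen categories to collections of images. Whatever judgment
# a label encodes becomes the "ground truth" the model is trained on.
# (Categories and filenames here are hypothetical, for illustration only.)

training_set = {
    "apple":       ["img_0001.jpg", "img_0002.jpg"],
    "tractor":     ["img_0103.jpg"],
    "scuba diver": ["img_0922.jpg"],
    # A person category that carries a judgment rather than a description --
    # the training process has no way to distinguish the two kinds of label.
    "bad person":  ["img_4471.jpg", "img_4472.jpg"],
}

def training_pairs(dataset):
    """Flatten the category -> images mapping into (image, label) pairs,
    which is all a classifier ever sees during training."""
    for label, images in dataset.items():
        for image in images:
            yield image, label

for image, label in training_pairs(training_set):
    print(f"{image} -> {label}")
```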
The training of contemporary artificial intelligence systems harkens back to 19th-century pseudosciences like phrenology—you know, trying to figure out whether somebody is a criminal or not based on the shape of their head. The thing is that, beyond objects, appearance actually tells you very little; for instance, there is gender recognition built into AI systems, but it’s based on the faulty premise that you can tell how somebody identifies based on their image. These are the kinds of assumptions we as a society want to question now, but at the same time, they are being increasingly built into our technological infrastructure.
Camille: Yeah, it feels like a conversation between technology and biological determinism. You have said that to label images is a political intervention in its own right; considering the training of algorithms in this light, what do you see as the way forward in reducing harm from these systems? Do you feel there is a future where technology could be trained to combat the status quo instead of perpetuating it?
Trevor: This is a very hotly debated subject right now. There is a massive industry at this point trying to design machine learning and computer vision systems that are fair or neutral or unbiased. And you can imagine why there is a huge amount of work being put into that, because a lot of people are starting to understand the degree to which machine learning systems do perpetuate injustice.
Any system of classification is created from a particular point of view—it will always have politics baked into it. There is no neutral ground to stand on, because the whole point of these schemes is to organize the world into concepts that are useful in one way or another… because when you teach a machine to recognize something, you’re also teaching it not to recognize something. Training images can teach us a lot about what counts as useful information, which in turn tells us the technical logics of how the system works.
Is there a way to create systems that don’t fall into these ideological traps? My position is that this is a fool’s errand—to decide the meaning of images is a political act, and there is always going to be bias built in. The solution is not to use computer vision or machine learning systems in contexts where they can be weaponized. I think racism is a fundamental feature of these systems; to make facial recognition technology ‘apolitical’ is to misunderstand what the system is.
Camille: Yeah, it definitely reminds me of the conversation around police abolition—the idea that reform is impossible, because racism is a feature of the system.
Trevor: I totally agree with you—abolition is such a powerful provocation because it forces you to define the true function of the police, and consider what other structures we can put in place to help address our problems as a society. That’s a very creative proposition, and a very radical one.
Camille: Speaking of radical propositions, I can’t help but think of Autonomy Cube, your sculpture that allows users to surf the web anonymously via the Tor network. It evokes the utopian ideal of the early Internet, a vision of freedom that feels almost impossible now, in the age of big data and mass surveillance. At the same time, similarly radical experiments—autonomous zones, mutual aid networks—are cropping up across America, trying to reimagine social infrastructure in the face of an oppressive government. It feels like we’ve entered into a liminal state… With society as we know it on pause, reality is more than ever up for grabs, and pushing at the boundaries of what’s possible can help us denaturalize oppressive systems even if the interventions themselves aren’t sustainable.
Trevor: Yes, those are the kind of radical propositions that I try to make—they’re almost fantastical gestures, not whole solutions. But to me, that’s what the exercise is: de-naturalizing the assumptions built into these systems, and considering what—or who—they’re serving.
Camille: Your work often addresses invisible systems in an effort to make the infrastructure behind them—their technical logics, their motivations—more legible. Can you speak a little to the process of reverse-engineering these systems? What’s the craziest thing you tried to build backwards?
Trevor: A long time ago, I was looking at CIA black sites—that was crazy, trying to find disparate pieces of evidence and create a picture out of them. The AI stuff is very much like that, particularly because, with data sets, you’re looking at the substrate upon which the technical systems are built; training images can teach us what counts as useful information to a program, which in turn tells us the technical logics of how it works. To really understand the politics of a system, we have to ask: What’s the historical foundation it’s built on? Whose interests funded it? Who wanted this technology to exist at that time, and why?
Take the history of machine vision, for example. The first experiment in facial recognition was done in the 1960s by a guy named Woody Bledsoe, whose research was funded by front companies for the CIA. One of the ways that he attempted it was by measuring what are called facial landmarks—the diameter of the eyes, the dimensions of the nose, the lips—all of which add up to something akin to a fingerprint. He wanted to create a set of measurements that every subsequent measurement could be compared against—a mathematical abstraction of the so-called ‘standard head,’ which could then be used as the baseline against which individual faces were measured.
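The landmark approach Paglen describes can be pictured as a small vector of facial measurements expressed relative to a baseline and compared by distance. The sketch below is a loose illustration of that idea only—the feature names, numbers, and comparison are invented for the example and do not reproduce Bledsoe’s actual method.

```python
import math

# Illustrative sketch of landmark-based matching (values and feature
# names are invented; this is not Bledsoe's actual procedure).

# The "standard head": a baseline set of measurements that every face
# is expressed relative to.
STANDARD_HEAD = {"eye_span": 62.0, "nose_length": 50.0, "mouth_width": 48.0}

def normalize(face):
    """Express a face's landmark measurements as ratios to the standard
    head, producing a fingerprint-like feature vector."""
    return [face[k] / STANDARD_HEAD[k] for k in sorted(STANDARD_HEAD)]

def distance(face_a, face_b):
    """Euclidean distance between two normalized faces: the smaller the
    distance, the more likely the system treats them as the same person."""
    a, b = normalize(face_a), normalize(face_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

enrolled = {"eye_span": 64.0, "nose_length": 52.0, "mouth_width": 46.0}
probe    = {"eye_span": 63.5, "nose_length": 51.0, "mouth_width": 47.0}

# In practice the distance would be compared against a chosen threshold.
print(f"distance = {distance(enrolled, probe):.3f}")
```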
Recently, I went through Bledsoe’s archives and was able to reconstruct that standard head digitally and make a sculpture out of it, which is being shown in Bloom. I’m interested in what the actual history tells us about the product… In my opinion, the fact that Bledsoe was working for the CIA is integral to the evolution of facial recognition. Why did they want this surveillance technology to exist? Well, to amplify the power of intelligence agencies and enhance the coercive apparatus of the state.
Camille: Not great, yeah.
[both laugh]
Camille: What do you see as the future of privacy in America, and its relationship to democracy?
Trevor: I am not generally a fan of the word privacy. To me, privacy is something very atomized; we think about it as something to be chosen individually. What I think about instead of privacy as an individual right is anonymity as a public resource. Historically, there are certain aspects of our everyday life that we allow to be subject to the assessment of the state or the workplace; you consent to having your performance measured at work, and that entails a level of surveillance. We submit certain information to the state for driver’s licenses and passports. But there have also been broad sections of everyday life that, for political reasons as well as technical ones, have historically been excluded from this architecture—and that’s where these modern privacy concerns come in, as technology makes new levels of surveillance possible.

Before the advent of the internet, no corporation had the resources to monitor people in their homes, to know what television they like and don’t like. It wouldn’t have been efficient. That has all changed now. In a networked world, it’s possible for those forms of measurement and optimization to enter aspects of our everyday lives that were previously inaccessible to state and corporate surveillance. So what you have is a profound loss of those sectors of society where there was some kind of anonymity—and I think preserving it is really important.
I think there’s an implicit recognition that for democracies to work, you don’t actually want the coercive arms of the state to be as efficient as they possibly can be; we actually want to impede the efficiency of policing because we understand that it comes at the expense of civil liberty. That’s why we have things like search warrants, for example; it’s why you need a warrant to tap somebody’s phone or search their house.
When we look at the history of social movements, we see a history of people breaking laws. And those protests were made possible in part because policing was not efficient enough to individually target everyone who was engaged in mass movements. It’s important for the sake of democracy that we maintain that standard, or we face the loss of other possibilities.
Trevor Paglen: Opposing Geometries is on view at the Carnegie Museum of Art in Pittsburgh, Pennsylvania, from September 4, 2020, through March 14, 2021. Bloom is on view at Pace Gallery in London from September 10 through November 10, 2020.