In an open letter, Molly Crabapple and Marisa Mazria Katz advocate against the use of the technology in journalism, a medium long defined by human storytellers
One year ago, OpenAI CEO Sam Altman tweeted an image of two teddy bears on the moon. “DALL-E 2 is here,” he wrote in the caption. “It can generate images from text… It’s so fun, and sometimes beautiful.”
In the year since, it’s proven to be much more than that. AI models have taken the world by storm, promising to revolutionize industries as diverse as art, culture, science, and medicine—and with the boom have come debates about the ethics of their use, at a time when such models are poised to displace human workers.
It’s why reporter, author, and artist Molly Crabapple teamed up with Marisa Mazria Katz—executive director of the Center for Artistic Inquiry and Reporting—to publish an open letter calling for the restriction of AI-generated illustration in journalism, arguing that, if the technology is left unchecked, the results will radically reshape the media landscape. “These generative tools can churn out polished, detailed simulacra of what previously would have been illustrations drawn by the human hand. They do so for a few pennies or for free, and they are faster than any human can ever be,” the letter reads, noting that programs like Midjourney, Stable Diffusion, and DALL-E are built using millions of copyrighted images by human artists. In their words, “Silicon Valley is betting against the wages of living, breathing artists through its investment in AI.”
According to Crabapple, the widespread use of these tools could widen the gap between blue-chip artists and blue-collar workers—further establishing the ability to make art for a living as a luxury enjoyed by an elite few, rather than a feasible career path for the working class. Among the creatives who most fear replacement are illustrators—who, unlike gallery artists, are tradesmen who work on commission, generating art for books and magazines to pay the bills. “A gallery artist might make one piece and sell it to a rich collector for a lot of money—whereas an illustrator makes many drawings, often for very little money,” Crabapple says. She notes that illustrators are often excluded from the debate about the impact of AI on creative communities, and that comparing the interests of gallery artists to those of illustrators is like equating the work of a professional tailor to that of a designer: They may both work in the field of fashion, she says, but why would the guy hemming your pants have the same economic interests as Alexander McQueen?
In January, a group of artists filed a class-action lawsuit against Stability AI, claiming that the company violated the copyrights of millions of creators by using images of their work to train its algorithm. In the months since, nonconsensual use of data has become a topic of frequent debate for creatives and corporations alike, with stock image behemoths like Getty Images filing their own lawsuits against Stability AI—which created the popular machine learning model Stable Diffusion—for unlawfully appropriating millions of photos.
This has prompted the invention of tools like Have I Been Trained?, which offers artists the opportunity to opt out of the public data sets used to train machine learning models—a practical intervention, intended to put power in the hands of artists while the new rules of intellectual property are hammered out. But, in Crabapple’s view, these initiatives still require time-consuming administrative legwork to manually submit such requests—creating a barrier to entry for the working-class artists who most need such protection. Katz agrees: “There’s a bigger ethical conversation we need to have around the use of AI in media,” she says, noting that while this technology could certainly be utilized by creatives, she worries that many more companies will use it in lieu of hiring human artists.
The algorithms underpinning such programs were trained on a massive dataset composed of over 5.8 billion images scraped from the internet—including sites such as DeviantArt, and even private medical records—without consent. This was made possible by LAION, a German organization that used its nonprofit status as a “fig leaf” to obscure the co-opting of billions of images; it then handed the dataset over to Stability AI, the for-profit company behind Stable Diffusion.
In Crabapple’s view, there’s something fundamentally different about the advent of technologies like the camera or printing press, and the development of AI art generators: “Cameras aren’t built on paintings; the people who invented cameras did not steal a bunch of work from living artists in order to create a thing that could replace them,” she says. Not only is the misappropriation of millions of works of art ethically wrong, she argues; the outsourcing of creative work to machine learning algorithms is also likely to bring about a more impoverished visual culture—one that trains consumers to accept ersatz versions of art and illustration, cobbled together by algorithms that lack the ingenuity, personal vision, subjectivity, and humanity of living artists. For this reason, she rejects the pro-tech bias embedded in the language of technological “advancement”—pointing out that using AI image generators to create magazine illustrations doesn’t necessarily advance the field of art, so much as it makes it easier for big companies to cut costs.
The practice of eliminating the need for human labor—and humane working conditions—is not new. “Historically, technological developments under capitalism are developed with the money of capitalists, for the goal of deskilling workers: disempowering them by making them less able to assert their rights, more replaceable, more interchangeable, and more alienated,” Crabapple says, citing examples like the invention of the self-acting spinning mule: a machine deliberately commissioned by factory bosses to break the power of striking workers, and reduce their ability to hold production ransom while negotiating fair labor conditions.
These days, even those at the forefront of AI development are having second thoughts. Last week, the so-called “Godfather of AI,” Geoffrey Hinton, resigned from Google, lamenting that the technology he helped develop could “take away more than [drudge work].” He’s only the latest in a series of industry leaders-turned-AI doomers, thousands of whom signed an open letter calling for a halt to the development of generative AI, citing risks to society and humanity should it continue to advance without the appropriate guardrails.
“People like to say that this is inevitable: that once something has been created, we need to be using it or preparing for its use,” Crabapple says, noting that such rhetoric is already being used to suppress dissent against the use of AI. “Nothing humans do is inevitable. You don’t have to buy or use these products, just because they’ve been created.”
To Crabapple, the fact that these generators are trained on ill-gotten data represents not just an obvious ethical problem, but a potential solution in the form of algorithmic disgorgement, an FTC penalty that has historically been wielded against those who committed this offense—including companies like Weight Watchers, which was forced to destroy algorithms illegally trained on children’s data. “There’s a legal precedent for this, and people don’t talk about it because they want to be like, Oh, the genie’s out of the bottle—you can’t undo it,” Crabapple says. “In my best-case scenario, the FTC would force these companies to destroy their algorithms, and they would have to retrain their tools using fully consenting data.”
This is the time to fight for such an outcome, Crabapple says—because, while many contemporary artists may be able to integrate AI technologies with their own creative practice, the same is not true of the millions of blue-collar workers whose jobs are likely to be upended by the AI boom. “We need to fight, because the risk is existential,” she says. “People say ‘adopt or die,’ but there is no adoption for illustrators—there’s only die.”