At 19, when I began drafting my webcomic, I had just been flung into adulthood. I felt a little awkward, a little displaced. The glittering veneer of social media, which back then was mostly Facebook, told me that everyone around me had their lives together while I felt like a withering ball of mediocrity. But surely, I believed, I could not be the only one who felt that life was mostly an uphill battle of difficult moments and missed social cues.
I started my webcomic back in 2011, before “relatable” humor was as ubiquitous online as it is today. At the time, the comics were overtly simple, often drawn shakily in Microsoft Paint or poorly scanned sketchbook pages. The jokes were less punchline-oriented and more of a question: Do you feel this way too? I wrote about the small daily struggles of missed clock alarms, ill-fitting clothes and cringe-worthy moments.
I had hoped, at most, for a small, niche following, but to my elation, I had viral success. My first comic to reach a sizable audience was about simply not wanting to get up in the morning, and it was met with a chorus of “this is so me.” I felt as if I had my finger on the pulse of the collective underdog. To have found this way of communicating with others and to make it my work was, and remains, among the greatest gifts and privileges of my life.
But the attention was not all positive. In late 2016, I caught the eye of someone on the 4chan board /pol/. There was no particular incident that prompted the harassment, but in hindsight, I was a typical target for such groups. I am a woman, the messaging in my comics is feminist leaning, and importantly, the simplicity of my work makes it easy to edit and mimic. People on the forum began reproducing my work and editing it to carry violently racist messages advocating genocide and Holocaust denial, complete with swastikas and images of people being pushed into ovens. The images proliferated online, with sites like Twitter and Reddit rarely taking them down.
A recent example of my work
For my comics, I keep things simple — punchy drawings, minimal text — because I like my ideas to be immediately accessible. I write a lot about day-to-day life. My pets, their personality quirks and my love for them are common themes.
The ways my images were altered were crude, but a few were convincing. By bombarding my social media with these images, the alt-right created a shadow version of me, a version that advocated neo-Nazi ideology. At times people fell for it. I received outraged messages and had to contact my publisher to make my stance against this ultraclear. I started receiving late-night calls and had to change my number, and I got the distinct impression that the alt-right wanted a public meltdown.
At one point, someone appeared to have made a typeface, or a font, out of my handwriting. Something about the mimicking of my handwriting, streamlined into an easily accessible typeface, felt particularly violating. Handwriting is personal and intimate to me, a detail that defines me as much as other unique traits, like the color of my eyes or my name. I can easily recognize the handwriting of my family members and friends — it is literally their signature. Something about this new typeface made me feel as if the person who had created it was trying to program a piece of my soul.
One of the images created by the alt-right
The harassment shocked the naïveté out of my system. A shadow me hung over my head constantly, years after the harassment campaign ended. I began writing differently, always trying to stay one step ahead of how my drawings could be twisted. Every deranged image the alt-right created required someone sitting down and physically editing or drawing it, and this took time and effort, allowing me to outpace them and salvage my career.
And then along comes artificial intelligence. In October, a random fan sent me, via Twitter, an image that had been generated by A.I. using my name as a prompt. It wasn’t perfect, but the contours of my style were there. The notion that someone could type my name into a generator and produce an image in my style immediately disturbed me. This was not a human creating fan art or even a malicious troll copying my style; this was a generator that could spit out several images in seconds. With some technical improvement, I could see how the process of imitating my work would soon become fast and streamlined, and the many dark potentials bubbled to the forefront of my mind.
I felt violated. The way I draw is the complex culmination of my education, the comics I devoured as a child and the many small choices that make up the sum of my life. The details are often more personal than people realize — the striped shirt my character wears, for instance, is a direct nod to the protagonist of “Calvin and Hobbes,” my favorite newspaper comic. Even when a person copies me, the many variations and nuances in things like line weight make exact reproductions difficult. Humans cannot help bringing their own humanity into art. Art is deeply personal, and A.I. had just erased the humanity from it by reducing my life’s work to an algorithm.
A.I. text-to-image generators such as Stable Diffusion, Midjourney and DALL-E exploded onto the scene this year and in mere months have become widely used to create all sorts of images, ranging from digital art pieces to character designs. Stable Diffusion alone has more than 10 million daily users. These A.I. products are built on collections of images known as “data sets,” from which a detailed map of the data set’s contents, the “model,” is formed by finding the connections among images and between images and words. Images and text are linked in the data set, so the model learns how to associate words with images. It can then make a new image based on the words you type in.
An A.I.-generated image that I created when I used my name as a prompt
The data set for Stable Diffusion is called LAION-5B and was built by collecting close to six billion images from the internet in a practice called data scraping. Most, if not all, A.I. generators have my work in their data sets.
Legally, it appears as though LAION was able to scour what seems like the entire internet because it deems itself a nonprofit organization engaging in academic research. While it was funded at least in part by Stability AI, the company that created Stable Diffusion, it is technically a separate entity. Stability AI then drew on this nonprofit research arm to build its A.I. generators, first releasing Stable Diffusion and then commercializing the technology in a product called DreamStudio.
So what makes up these data sets? Well, pretty much everything. For artists, many of us had what amounted to our entire portfolios fed into the data set without our consent. This means that A.I. generators were built on the backs of our copyrighted work, and through a legal loophole, they were able to produce copies of varying levels of sophistication. When I checked the website haveibeentrained.com, a site created to allow people to search LAION data sets, so much of my work was on there that it filled up my entire desktop screen.
Many artists are not completely against the technology but felt blindsided by the lack of consideration for our craft. Being able to imitate a living artist has obvious implications for our careers, and some artists are already dealing with real challenges to their livelihood. Concept artists create works for films, video games, character designs and more. The name of Greg Rutkowski, a hugely popular concept artist, has been used as a prompt in Stable Diffusion upward of 100,000 times. Now his name is no longer attached to just his own work; it also summons a slew of imitations of varying quality that he hasn’t approved. This could confuse clients, and it muddies the consistent and precise output he usually produces. When I saw what was happening to him, I thought of my battle with my shadow self. We were each fighting a version of ourselves that looked similar but was uncanny, twisted in a way to which we didn’t consent.
It gets darker. The LAION data sets have also been found to include photos of extreme violence, medical records and nonconsensual pornography. There’s a chance that somewhere in there lurks a photo of you. There are some guardrails for the more well-known A.I. generators, such as limiting certain search terms, but that doesn’t change the fact that the data set is still rife with disturbing material, and that users can find ways around the term limitations. Furthermore, because LAION is open source, people are creating new A.I. generators that don’t have these same guardrails and that are often used to make pornography.
In theory, everyone is at risk of having their work or image turned into a vulgarity with A.I., but I suspect those who will be hurt most are those already facing the consequences of improving technology, namely members of marginalized groups. Alexandria Ocasio-Cortez, for instance, has an entire saga of nonconsensual deep-fake pornography attached to her image. I can only imagine that some of her more malicious detractors would be more than happy to use A.I. to harass her further. In the future, with A.I. technology, many more people will have a shadow self with whom they must reckon. Once the features that we consider personal and unique — our facial structure, our handwriting, the way we draw — can be programmed and contorted at the click of a mouse, the possibilities for violations are endless.
I’ve been playing around with several generators, and so far none have mimicked my style in a way that can directly threaten my career, a fact that will almost certainly change as A.I. continues to improve. It’s undeniable: the A.I.s know me. Most have captured the outlines and signatures of my comics — black hair, bangs, striped T-shirts. To others, it may look like a drawing taking shape.
I see a monster forming.
Sarah Andersen is a 30-year-old cartoonist and the illustrator of a semiautobiographical comic strip, “Sarah’s Scribbles.” Her graphic novel “Fangs” was nominated for an Eisner Award.