Debunking the AI- vs. Human-Learning Comparison

Why is it stealing when AI companies scrape work to train AI and not stealing when a human artist learns from other artists?

"Human?” (image created by author using weavesilk and filters)

The argument that generative AI learns in the same way artists do is often employed in an effort to convince the public that the data laundering that major AI companies like Stability AI, OpenAI/Microsoft, and Midjourney have engaged in is no different from a human artist learning from existing art or using it for reference and/or inspiration. It’s an argument that is usually put forward by people who are not artists but who have an interest in profiteering from the labor of artists without any agreed-upon royalties flowing back to the original artists. This argument also lends a semblance of sentience to generative AI. Most importantly, it is an argument that is meant to undercut artists’ copyright claims in derivative works generated by AI. 

So why is it stealing when AI companies scrape work to train AI and not stealing when a human artist learns from other artists?

Detailed Analysis

  1. Despite claims that AI learns in the same way as humans, (a) no one actually knows exactly how the human brain works, and (b) it appears AI companies do not even know exactly how their own AI systems work. The only thing human brains and AI have in common here is that how they work is not fully understood. It is, therefore, not possible to say, with any scientific certainty, that AI learns in the same way humans learn.
  2. Humans, like all animals, are born with instinct. It is this instinct that compels a newborn baby to move toward its mother’s breast without any prior guidance. (This beautiful and tender phenomenon is known as the “breast crawl,” and it also occurs in other mammals.) I would argue that it is also this inborn instinct that compels sentient beings of other species, such as the monarch butterflies that overwinter in specific trees in Mexico, to travel thousands of miles, without the guidance of their parents or grandparents, to the exact places generations before them went for a particular purpose (often mating and/or giving birth). AI, on the other hand, without external human input (data), is literally a blank slate with no such sensibilities. Again, and this is important, AI only gains the appearance of sentience because it is fed the products of human sentience: human data, including art. Humans, and specifically (in the case of most of the popular generative AI) artists, have provided, and will forevermore have provided, practically all of the value added. Data is what makes generative AI function. This point cannot be overemphasized. In fact, if generative AI is fed its own synthetic output rather than original human data, over time it faces model collapse and will produce gibberish. (For the technically curious, a toy sketch after this list illustrates the idea.) This is startlingly unlike humans and is also why AI companies like OpenAI/Microsoft are now licensing data that is pure, that is, not tainted by their own product, such as the treasure trove of original human writing owned by the Associated Press. Why should AI companies pay other companies to license data but not make the same arrangements with the independent human artists from whom they first scraped data? Basic economics dictates that those who add the most value should receive the most profit from what is created as a result of their contributions (in this case, coerced and not collaborative). Without fair and adequate compensation to those who contribute the most value added, generative AI is nothing more than a digital form of colonization.
  3. Following from the previous point that humans are far more mysterious creatures who enter the world with that inkling of purpose known as instinct, most would agree that every person has a unique personality shaped by the particular blend of experiences that only that person has had. There is a you-had-to-have-been-there quality about each life, which means that none of us can ever fully comprehend another human being nor, really, another living creature. If a child goes on to become an artist, this aspect of their identity that is uniquely theirs will also seep into their art, regardless of whether they are influenced by other personalities along the way. Artificial intelligence (AI), by definition, has no personality except the fake identities that may be constructed for it by humans.
  4. No serious modern artist aspires to mimic other artists. We learn from those before us, but the goal (as every good art teacher will drill into a student’s mind and soul) is to find one’s own voice as an artist. Mimicking other styles for more than the sake of practice and presenting that mimicry as one’s own art is a good way to lose credibility and respect, especially in a world in which there are not just artists but art critics with decades of knowledge of the fields in which they specialize and who pride themselves on staying up to date on what’s new and relevant. Unlike AI, human artists absorb what they have learned and subconsciously combine it with, or filter it through, their one-of-a-kind personalities and lived experiences to create art that is uniquely theirs. This is what gives meaning to art. It is only through finding one’s own voice that an artist is able to rise to the top of their field. However, even artists whose styles are not groundbreaking must bring something new to the table. This is how they are able to make a living. If they offer nothing different, they do not thrive (outside of the rigged fine arts market I discuss below) as independent artists. Having a unique voice, for example, has set many a singer apart from others. And no one will convince me that anyone but Benedict Cumberbatch could have portrayed a modern Sherlock Holmes the way he did, or that anyone but Fran Drescher (fangirling here) could have portrayed Fran Fine the way she did. Regarding the question of whether actors produce intellectual property, this ability to portray a character in a unique way or, for a singer, to sing a song in a distinctive manner is why some writers and composers write pieces with specific actors and singers in mind. (Note that visual artists who work for corporations must adhere to the unique styles of those particular corporations or studios.) Paradoxically, being too unique can also marginalize an artist. There are numerous artists who were later said to be “ahead of the times” and many other great artists who wallow in obscurity. More on why this happens, especially now, below.
  5. Those who say all music sounds the same and argue that no art is unique are often not very invested in art (beyond seeing it as a commercial product). Artists, including some in the pop world who are forced to release commercialized products in order to survive as artists, have been victims of algorithms that push what is safe and mediocre to the forefront. (Note that safe music, for example, is not necessarily inoffensive music. It is music that we have been trained to listen to and have accepted as normal and even good.) Ironically, these algorithms are the product of Big Tech, which now stands to further benefit from having pushed more unusual and meaningful art to the margins. AI “art” exacerbates this problem, which is unsurprising, since Silicon Valley’s and Wall Street’s interests do not align with humanity’s but with their own.
  6. Even when human artists attempt to create work that seems like it was made by other artists, they can rarely do so convincingly, especially when it is the work of an artist who is very highly skilled. We are rarely fooled by Elvis impersonators or by a friend who attempts to impersonate a celebrity (the shortcomings of such impersonations are part of the charm), and we have human experts who can establish the authenticity and provenance of very old works of art. AI, since it lacks human imperfection, can mimic even a highly expert artist’s style with enough precision that it is difficult to ascertain that the result was not created by the original artist. Some find this impressive, forgetting that it is only possible because the AI previously scanned many works of human art and has no desire or instinct to create anything new on its own, nor even any understanding of what it outputs. I prefer what is genuine but imperfect to what is technically perfect but deeply fake (and lacking in the depth of meaning imbued in a great work by a traditional artist). AI’s ability to mimic the work of even the most highly skilled artists speaks to the fact that it takes a little from many pieces an artist has created, the equivalent of ripping off the entire body of the artist’s work, and so is able to fake not just style but substance. Generative AI actually does more damage to an artist than any human who plagiarizes or attempts, in a mostly traditional way, to create an exact copy of a single work or even of a few works by an artist, because it steals from across an artist’s body of work and is, therefore, able to act as a digital clone of that artist (and of any other human artist whose body of work it has been trained on). What is being copied, essentially, is the human, not just the art. This is dehumanizing, and it should be illegal to create a digital clone of anyone without their consent (opt in). Here’s how I described what generative AI actually does in my poem “Missing Digits.” This part is in the voice of an AI “in the style of” a “white male rapper”:

I mean, logically, you take a little here, there,

it’ll add up to A LOT and none

will be the wiser. But the key

is to siphon the essence: extract

industriously: my art: synthesis, summary:

I eat culmination, voice, identity.

You feel me? Dawg, this is New God.

Aw, I’m just riffing: humansplaining,

cause I’m the realest MF after all.

Cut the hands of artists and say, ‘Fish!’”

  7. AI cannot forget. This is crucial, since forgetting is part of what facilitates a human’s ability to create work that is sufficiently original rather than derivative. Forgetting is part of the process of learning via osmosis or absorption, which is organic (human), rather than through rote repetition, which is mechanical (robotic/AI). Computer scientist Patrick Juola noted, in a 2013 article in which he described how he used a computer program he created to flush out J. K. Rowling as the anonymous author of The Cuckoo’s Calling, that when we humans read, unlike AI, we only pick up on the gist or “general meaning” of a sentence. We do not normally memorize the books and various texts we read (unless we intentionally choose to). Even trained poets like me do not, as a rule, memorize poems by other poets (though this might be a good exercise). As artists, we are influenced by rather than wholly dependent on other artists whose works we admire. Generative AI, on the other hand, memorizes and then has to be forced, not always successfully (as shown by a study published on August 8, 2023, which found that ChatGPT regurgitated copyrighted text by J. K. Rowling), to forget. Humans can also learn through repetition, but this is a process mostly used by humans whose professions are built around performance or re-creation rather than creation. Moreover, such humans will still bring to their performances their unique instincts, personalities, emotions, etc. That is what makes Hilary Hahn’s rendition of Prokofiev’s Violin Concerto No. 1 distinct from Joshua Bell’s. AI and its outputs are kaleidoscopic. Humans are fluid and prismatic: iridescent.
  8. The fact that prompt engineers do not know what AI will output creates another distinction between them and the majority of serious artists. Sure, the art world has done itself a disservice by accepting that people who splash paint or even feces on a canvas, or who engage in similarly random and often meaningless acts, are creating art. However, I would point you to Adam Ruins Everything’s excellent analysis “How the Fine Art Market Is a Scam,” in which Adam Conover and his crew expose how wealthy people (including, likely, many of those who stand to benefit from the transfer of traditional artists’ very limited wealth to the pockets of already grossly wealthy individuals) use the fine art market (galleries, auctions, and museum donations) to launder money. This scam culture that pervades the upper echelons of the art world, facilitated by wealthy people who view art as a cash cow and care little about actual art and artists, seems a precursor to the new form of theft from mostly lower- and middle-income artists that has come in the form of generative AI “art” built on data laundering.
  9. Finally, the argument that generative AI learns in the same way as human artists has an unspoken “corporations are people, too” reasoning behind it. It is rulings like Citizens United that have enabled Big Tech’s moguls to have far too much influence in and/or power over our government, turning it not so slowly but surely into a plutocracy. The majority of profits from generative AI accrue to these same individuals. Many in the art community, including Molly Crabapple and the Center for Artistic Inquiry and Reporting, have called this “effectively the greatest art heist in history.” Comparing the scale of theft that AI companies like Stability AI not only engaged in and facilitated but also trained users to engage in, and then continued to encourage, to that of the rare individual who consciously or subconsciously copies another artist using traditional tools is beyond laughable.
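For readers who want to see the model-collapse point from item 2 in concrete terms, here is a minimal toy sketch. It is my own illustration under a deliberately simplistic assumption, namely that a “model” is nothing more than a Gaussian distribution fitted to its training data; real generative AI is vastly more complex, but the feedback loop of a model being trained on its own synthetic output is analogous.

```python
# A toy numerical sketch of "model collapse" (an illustration, not any AI
# company's actual training pipeline). The "model" here is just a Gaussian
# whose mean and spread are estimated from data; each new generation is
# fitted only to samples drawn from the previous generation's model.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: a stand-in for original human data (a wide, varied distribution).
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 201):
    mu, sigma = data.mean(), data.std()               # "train" on the current data
    data = rng.normal(loc=mu, scale=sigma, size=20)   # next generation sees only synthetic output
    if generation % 50 == 0:
        print(f"generation {generation:3d}: estimated spread = {sigma:.3e}")

# With no fresh human data ever re-entering the loop, the estimated spread
# drifts toward zero over the generations: the variety in the data collapses.
```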

Note: This article was first published in August 2023. It was modified for accuracy on January 22, 2024. For example, after reading Julieta Caldas' excellent piece "Philanthropy Is a Scam," which describes how the wealthy use "donations" to nonprofits to further advance their interests, I decided it made sense to replace "corporatocracy" with "plutocracy."