The Backwardness of Generative AI: Making Exploitation Cool Again

Techno-Optimist Paradise (photograph by the author)

Telling ego-maniacal rulers/cultish leaders not to exploit others is a lost cause. They will always argue, like Marc Andreessen, Sam Altman, and Beff Jezos/Guillaume Verdon, some version of Manifest Destiny, The White Man’s Burden, divine right, and/or eugenics cloaked in modern-speak.

Likewise, telling pirates to stop pillaging is a lost cause. They will always argue some version of might makes right or survival of the fittest (such as the viral “Adapt or die!”): in short, a less-polished version of the corporate-speak of their enablers/overlords. The more educated among them, while essentially engaging in the same behavior, may opt for genteel language that belies their actions. For example, LinkedIn (which is owned by Microsoft and promotes ChatGPT heavily) once described using generative AI that was created by stealing from artists in the most innocuous-sounding way—as “skills diffusion.”

But here is a truth that those who use or celebrate exploitative technologies like ChatGPT, DALL-E, Midjourney, Gemini, Stable Diffusion, Riffusion, etc. gloss over, and it is a truth as old as time: unpaid labor, which is usually coerced, as it is in this case, is by definition cheap. The “cheapness” of that labor is not an argument in its favor but rather the opposite. This seems obvious, yet it is also clear that many who knowingly use these exploitative technologies do not fully grasp it.

Here’s another truth: Big Tech’s major defense of generative AI in the arts, the claim that it “democratizes” art creation by enabling people who could not otherwise produce high-quality text, images, audio, or video to do so, is essentially just defining technology over and over. By definition, technology is that which makes it easier to do something. If, like Beff Jezos/Guillaume Verdon, we accept this circular logic as a wholesale replacement for actual logic, then we are setting ourselves up for a world in which we can never question the merits, harms, costs, and externalities of new technologies. We must simply, as accelerationists like Verdon want, go full steam ahead with every new technology without the kind of safety standards and regulations that ordinary people expect in an actual democracy not governed by Wall Street and Silicon Valley.

Delving more deeply into examples of Big Tech putting this flawed, circular logic into practice . . . parading people living with disabilities who can now do things they previously could not because they use generative AI cannot absolve generative AI corporations of responsibility for exploiting artists to provide that technology. If we overlook the massive costs borne by those who make up the backbone of the arts sector, we are also likely to overlook other major costs, including to the very people the technology purports to help. Based on “Inventions in Sound,” the touching and elegant video by poet Raymond Antrobus, which features other people living with deafness, I do not think replacing human transcriptionists and captioners with generative AI will make life better for most people who are deaf or hard of hearing.

As the universe would have it, while finalizing this article, I came across a post by Dr. Timnit Gebru sharing a story by Paresh Dave of Wired magazine titled “Google Used a Black, Deaf Worker to Tout Its Diversity. Now She’s Suing for Discrimination.” In it, Google employee Jalon Hall, who is deaf, recounts how Google lured her to work there under the false promise that it would provide her with the reasonable accommodations required under the Americans with Disabilities Act. Once she was employed by Google, the company used photos of her on social media to promote itself as “inclusive” and claimed it was “helping expand opportunities for Black Deaf professionals.” Hall has a very different story to tell.

According to the article, “Hall accuses Google of subjecting her to both racism and audism, prejudice against the deaf or hard of hearing. She says the company denied her access to a sign language interpreter and slow-walked upgrades to essential tools.” Crucially, Google does not deny the allegations in her lawsuit and instead argues that Hall’s case should be dismissed “on procedural grounds, including bringing the claims too late.”

Hall’s story is compelling because of her experience as a person living with a disability working for a company that is possibly the best known in the world and that has nonetheless been persistently two-faced about its professed commitment to the rights of people living with disabilities versus how it actually treats them. What is also striking is the extent to which Google’s treatment of Hall appears to be mirrored in its technologies that are designed to assist deaf people. Hall discovered that “the AI transcriptions in the software YouTube built for moderators were poor quality,” making it impossible for her to complete her work and meet her quota. Managers informed her the problem might not be fixed for “years.” Finally, also noteworthy is the extent to which Hall’s experience as an employee using these AI technologies, and her conclusions upon researching them, reflect concerns raised by Antrobus and other people living with deafness who shared their stories in “Inventions in Sound.” Dave writes, “[Hall’s] research showed that Black, Deaf users are concerned about the potential for AI systems to misinterpret signs, generate poor captions, take jobs from interpreters, and disadvantage individuals who opt for manual interpretation” (2024).

There is an inherent hypocrisy in oppressing an already exploited and marginalized group—artists—under the guise of uplifting another oppressed and marginalized group—people living with disabilities. Note, also, the double exploitation of artists with disabilities whose works are used to train generative AI without their consent and without crediting and compensating them. One might even go further and say that, like imperialists and colonizers claiming divine favor, those who do this are disingenuous, deceptive, deeply fake. Yet this is the tech sector’s playbook.

In addition to people living with disabilities, Big Tech likes to highlight Black generative AI users as more evidence of generative AI’s “democratizing” effect . . . despite participating in the age-old tradition of ripping off Black artists (most notably, musicians) in order to make such “democratization” possible. I do not think it is a coincidence that generative AI companies and AI music hawkers first targeted Black artists like Drake, Travis Scott, Ice Spice, Beyoncé, Rihanna, The Weeknd, and Kanye West for deepfakes. One duo of AI bros was so brazen as to build an entire company around Drake. No joke. With no shame, and as obnoxiously as one would expect from anyone who would do this kind of thing, these two white or white-passing men felt justified in monetizing a human being . . . who happens to be Black . . . without his consent and without compensating him. Sadder still is the fact that other people actually used this tech.

This is where we are in the year 2024:

  • We have an entire industry that is dedicated to exploiting artists—whom some clearly need to be reminded are human beings—as unpaid, coerced labor;
  • the government is doing little to remedy the situation; and
  • the excuse from/rationale of some knowledgeable users of exploitative generative AI who promote it to others (the excuse of certain music marketers, for example) is that they are using it while “waiting for the courts to decide” on this basic, obvious issue.

Any of this sound familiar?

As profits accrue to Big Tech’s rulers and financiers, historic patterns of exploitation are being replayed. The generative AI industry continues to follow in the footsteps of white male rulers of old who made their fortunes through conquest, colonization, and unpaid, coerced labor.

Big Tech leaders’ actions are also reminiscent of Gregor MacGregor, the 19th-century con man who sold investors on the idea of a fake country called Poyais (and who even, like today’s crypto-turned-AI bros, went so far as to invent fake money to lend credibility to, and raise real money for, his scheme). Will we allow Big Tech’s leaders, as both the Scottish and French governments allowed MacGregor, to get away with massive theft, proving that it is OK to steal, even from the poor or middle class, if you are rich? Even if your wealth was built on, and remains dependent on, unpaid, coerced labor?

The tech-drug lords keep peddling the two-faced lie that they have the best interests of all of humanity at heart and that their efforts and the “mandatory” sacrifice of artists are in service of a better world for the many and not the few. And some of us keep pretending colonizers should be trusted.