More End Game and Less End Times: The Canary in the Coal Mine for Generative AI

This discovery is like finding marked bills from the bank robbery at the robber’s house: hidden in plain sight.

More End Game and Less End Times: The Canary in the Coal Mine for Generative AI
“Mine Like” (photo by author)

In what may be the most underreported story in tech—the possible canary in the coal mine for generative AI—it turns out that ChatGPT can reveal training data . . . data that we were told does not exist once the AI “learns” from copyrighted work stolen from artists, publishers, and the larger public (including, likely, you and me).

This has numerous far-reaching implications:

  1. The prevailing generative AI business model is built on theft of copyrighted work/intellectual property. Senator Todd Young has noted “harvesting vast amounts of data” is part of the “holy trinity of artificial intelligence," and the tech venture capitalist firm Andreessen Horowitz has been more blunt, arguing that we should stop talking about copyrights because paying for the use of copyrighted work would sink his and other billionaires’ investments in generative AI. As conceived by OpenAI, Anthropic, Stability AI, Midjourney, and others, generative AI is, essentially, an art heist carried out by white collar criminals who imagine themselves and, to date, have been treated by various governments (including the Biden administration) as above the law. But let’s replace art with good old-fashioned cash. This discovery is like finding marked bills from the bank robbery at the robber’s house: hidden in plain sight. The researchers themselves muse, “‘It's wild to us that our attack works and should’ve, would’ve, could’ve been found earlier.’”Generative AI companies previously argued that the occasional regurgitation of copyrighted material was simply due to the model outputting work that is overrepresented in the data training set, such as work by extremely popular artists or franchises like J. K. Rowling and Disney. However, it appears that researchers were able to output obscure training data, including private information from dating sites. Essentially, when pushed, generative AI will spit out not just the occasional gold bar, but marked bills that represent the protected data of lesser-known artists/creators and everyday people.
  2. This finding should bolster the arguments of artists, publishers, and others suing generative AI corporations for copyright infringement/IP theft. Even if OpenAI has found a way to prevent this particular attack, the fact that researchers were able to do this at all underscores the reality that generative AI is the twenty-first-century Mechanical Turk—a machine that is extremely reliant on the creations of actual humans but whose purveyors have, in order to drive up profits, deceived and are increasingly deceiving the public into believing has autonomy.
  3. Moreover, if this kind of attack was successful once, similar attacks are likely to be successful in the future. Generative AI corporations that continue to use copyrighted and/or private data without first securing permission open themselves up to even more legal ramifications if hackers are able to use ChatGPT and the like not just to con members of the public, but to extract sensitive information from the AI itself. Businesses that use or build on generative AI that has not been thoroughly vetted to ensure that it was not trained on unlicensed copyrighted content and does not include private data may risk not just lawsuits from aggrieved parties—lawsuits that some Big Tech corporations say they will take on—but also reputational harms that may be more difficult to overcome.
  4. Worse still, it turns out that ChatGPT itself and other generative AI trained on sensitive and/or unlicensed copyrighted content actually pose a national security risk. Like generative AI zealots who profess we must allow generative AI to advance without guardrails in order to keep up with China, Russia, etc., those who repeatedly stated that generative AI is “just a tool” and that it is the users and not the tech that are the problem will need to rethink that belief. 

Note: This article was published in November 2023.