I’m a multidisciplinary artist. I’m also exhausted from fighting for what should be obvious—no one should be able to exploit my hard work without my consent and without crediting and compensating me fairly. No worker should be uncompensated for their labor. Please let that sink in.
It is precisely because I dare to hold this fundamental truth—a truth I believed was etched into our collective national conscience—that I fail to understand why commonsense measures to ensure this does not happen continue to be held up.
When I first read the summary of President Biden’s executive order on regulating artificial intelligence, I was deeply disappointed. There was no mention of protecting copyrights—the foundation upon which artists earn a living. Since then, I’ve read more of the order itself and I am more soberly disappointed. Artists and other copyright owners will, essentially, have to wait many more months for anything approaching closure on this issue. For many, this will take a financial and emotional toll. Instead of taking obvious commonsense preventative measures to protect artists’ copyrights, and despite the many hearings and feedback sessions that have already transpired over the past year, we are getting more of the same: delay. One wonders whether this is a necessity or a strategy.
In the 270 days the White House has allotted to yet more study, artists will likely lose more ground in the battle to protect our rights. Given the speed with which AI is being developed and proliferated, the use of generative AI will be forced on the masses, and thereby normalized, by tech corporations that own both the hardware and software that make our machines run. Will the American public, which has supported the rights of artists thus far, continue to do so as it becomes increasingly complicit in stealing from these same artists?
A new argument against regulating generative AI is that this will stifle competition from AI startups. However, I do not understand why the interests of tech startups, risky businesses that have only recently come into existence, are being given more weight in decisions about regulation than the interests of artists who have contributed, for decades, to America’s prosperity. Like startups, these artists are entrepreneurs. Unlike most startups, these artists have weathered many storms and made it this far precisely because they are excellent at what they do. It is these artists’ hard-earned skill—excellence that rarely pays well—that, paradoxically, makes them prey for those who care more about profiteering than about art, which, incidentally, is made by actual artists and not by prompt engineers. I do not understand why so-called “AI talent” is now perceived as more valuable than the talents of artists without whom generative AI would not exist or function. We seem to live in a society that worships the fruit and casts away the seed. We seem to prefer seedlessness.
Failing to protect the copyrights of more traditional independent artists will kill rather than promote innovation. Serious artists who have dedicated their lives to creativity, often at great personal cost and sacrifice, will find creative ways to avoid supporting the AI industrial complex (including engaging in data strikes or data poisoning, or simply releasing only subpar work). Others will be forced to abandon the arts as a career or to work for large corporations, the only entities able to offer compensation and minimal protections (thereby squashing the small businesses that are independent artists in order to lift up speculative AI startups). Still others will relocate to countries with better protections for independent artists.
Though there are individuals who use generative AI as a tool, the vast majority use it as a machine—they enter prompts and expect the machine to provide multiple, satisfactory results within seconds. They think or pretend this is magic and ignore or deny the vast amount of human contribution (stolen labor) that makes this possible. Generative AI regulation should reflect this reality of how the vast majority of people, who do not have the skills to make meaningful edits to images, audio, and video, use it. Despite this, our government, which also benefits from the AI industrial complex through (a) data it collects about individuals (without pesky warrants) and (b) kickbacks from tech corporations, continues to focus on intervention rather than prevention, ignoring rather than preventing the crime of data laundering and never adequately compensating the victims of this crime.
A preventative approach would do away with the black box business model and compel AI companies to license data, which is not only doable but has been done (for example, Getty Images has long been in the business of licensing photography, record labels regularly license their artists’ music for use in other media, OpenAI is now licensing text from the Associated Press, and Google is now licensing music from Universal Music Group). Smaller AI startups could license data from other small businesses, such as independent music labels and independent book publishers. A robust licensing market would eliminate the problem of bad data; maintain and even improve the quality of content online; address concerns about bias, non-consensual porn, and child safety; ensure that many more of those who contribute to generative AI’s usefulness and success are fairly compensated; and incentivize more traditional artists to continue to share high-quality, authentic, and innovative text, audio, visuals, and videos. In the absence of preventative regulation that protects artists’ rights, including our copyrights, we can expect the opposite outcomes.
In the meantime and until our copyrights are protected, I encourage artists to consider that we have far more power than tech corporations would have us believe. In essence, generative AI companies like OpenAI, Midjourney, and Stability AI and their customers are dependent on us. We can do whatever we like with our art, regardless of whether some, like James Broughel of the Competitive Enterprise Institute, consider our stolen work their “proprietary AI data.”
Note: This article was published in October 2023.