The Street
Ian Krietzberg

Human creativity persists in the era of generative AI

Before she graduated high school, Michele Rosenthal knew that she wanted to dedicate her life to art. 

She had always wanted to be a professional illustrator. When it came time to apply to colleges, she decided to attend art school, where she majored in illustration before becoming a professional freelance artist.

"What I really love is art," she said in an interview with TheStreet.  

But over the last year — in the wake of the launch of ChatGPT, Stability AI's image models and the like — the business of art has become more difficult. 

Related: AI trends: What experts, execs think artificial intelligence will look like in 2024

Industries that before paid illustrators for their work have increasingly turned to artificial intelligence image generators. The jobs, Rosenthal said, have begun to dry up.

Rosenthal's initial response to AI image generation was that there is no real comparison between an artist and an algorithm. A professional artist, she said, brings far more to a job than an output. What she didn't expect was how quickly, and how eagerly, so many people would choose to adopt AI tools as a means of replacing flesh-and-blood humans. 

"The shockwaves are real," she said. "Artists are definitely feeling the results of everyone adopting this technology so quickly." 

Still, Rosenthal doesn't think this bump in the road for professional artists will be permanent. If people want something new, she said, they still have to turn to a human artist; AI models produce a mere amalgamation of their training data rather than anything truly creative. 

The ethics of AI

Artificial intelligence is not a new technology. But with the launch of ChatGPT last year, it became consumer-facing in a whole new way, touching off an AI arms race as startups and tech giants alike raced to be the first to ship products. 

That race has been mirrored in courtrooms and government buildings as regulators have struggled to understand the technology so that they can figure out how best to regulate it. And a debate, meanwhile, has been ongoing within the field about the harms, risks and benefits of AI. 

A piece titled 'Triple Self Portrait' by Michele Rosenthal. 

Courtesy of Michele Rosenthal

Among these ethical concerns — which include algorithmic discrimination, misinformation, political instability, data privacy and more — are questions of copyright infringement and fears of job loss. 

Over the past few months, several lawsuits have been filed by groups of authors and artists against the companies behind these models. A class action lawsuit, filed in September by the Authors Guild, alleges that Microsoft (MSFT) and OpenAI, without providing notice, credit or compensation, used copyrighted work as the underpinning of their models. 

"These algorithms are at the heart of Defendants' massive commercial enterprise," the lawsuit alleges. "And at the heart of these algorithms is systematic theft on a mass scale."

OpenAI did not respond to a request for comment on this piece. 

Another case, brought by a group of visual artists against Stability AI, Midjourney and other companies, argues a similar premise: "AI image products are primarily valued as copyright-laundering devices, promising customers the benefits of art without the costs of artists."

The companies involved have argued that training their models on this content is "fair use," a claim that, if upheld, could shield them from copyright infringement liability. The U.S. Copyright Office said in August that it had undertaken a study of copyright law as it relates to generative AI to determine how best to treat the sector. 

"Training generative AI models in this way is, to me, wrong," Ed Newton-Rex, a composer and technologist, wrote in a Nov. 15 op-ed. "Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works."

Newton-Rex in November resigned from his position leading Stability AI's Audio team, based, he said, on a disagreement over the company's position that training on copyrighted work is fair use. 

Saying that the act of training on copyrighted work amounts to the exploitation of creators, Newton-Rex questioned the model as an unjust disruptor of existing copyright practices in the arts. He has instead supported generative models that are trained only on licensed content and that are transparent about what their training sets contain. 

As the legal questions are battled out in court, with the potential result of stricter enforcement of existing U.S. copyright law, a point of contention has grown between creatives and those who have embraced AI content and image generators: the myriad nuances of creativity, and the differences between human and algorithmic creation. 

TheStreet spoke with a range of experts, including AI researchers, psychologists who specialize in the study of human creativity and the artists themselves, about the differences between algorithmic output and human creation, the importance of the creative process, the impact of human creativity, the distinction between art as a hobby and art as a profession, and the ways in which AI can affect both. 

Related: The ethics of artificial intelligence: A path toward responsible AI

How LLMs 'create'

Large language models (LLMs) like ChatGPT are, at their core, predictive models. They are trained on an enormous quantity of data (the "large" in "large language model") that includes books, newspaper articles, social media content, photos and videos, and their output is tied directly to that input, something that cannot quite be said of humans.  

The Atlantic reported in September that a dataset called "Books3" was used to train models by Meta, Bloomberg and others. 

The dataset included more than 191,000 books, the basis of a July lawsuit filed against Meta by writers Sarah Silverman, Richard Kadrey and Christopher Golden.

Generative text models, according to AI researcher Dr. John Licato, essentially output a "probability distribution that's defined over a set of possible tokens." 

"There's a very specific mathematical function that it's trying to optimize when it's doing that training process," he told TheStreet. "It's trying to make it so that the probability distribution best reflects the training data."

Though Licato noted some similarities between the creative processes of humans and AI models, he said that there is an "inherent limit" to what LLMs can actually output. 
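
To make Licato's description concrete, the sketch below shows how raw model scores become a probability distribution over possible next tokens, and how training pushes that distribution toward the data via a cross-entropy loss. It is an illustrative toy, not code from Licato or any company mentioned here; the five-word vocabulary, the scores and the PyTorch calls are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

# Toy vocabulary and hypothetical raw scores ("logits") a model might assign
# to each candidate next token. Real models score tens of thousands of tokens.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = torch.tensor([2.0, 0.5, 1.0, -1.0, 0.2])

# Softmax turns the scores into the probability distribution Licato describes:
# one probability per candidate token, all summing to 1.
probs = F.softmax(logits, dim=-1)
print({tok: round(p.item(), 3) for tok, p in zip(vocab, probs)})

# Training minimizes cross-entropy against the token that actually followed in
# the training text, nudging the distribution to better reflect the data.
actual_next = torch.tensor([0])  # suppose the corpus continued with "the"
loss = F.cross_entropy(logits.unsqueeze(0), actual_next)
print(loss.item())
```

Scaled up to billions of such predictions, this is the sense in which an LLM "creates": its distribution over the next token is shaped entirely by the text it was trained on.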

Diffusion models and high-quality datasets

When it comes to diffusion image generation — a method employed by Stability AI — models are trained to remove noise that has been added to images in a given dataset, learning the structure of those images well enough to produce variations of them. 

The artist lawsuit against Stability and its peers notes that "the primary objective of a diffusion model is to reconstruct copies of its training images with maximum accuracy and fidelity."

The complaint goes on to cite a research scientist at Google DeepMind, Nicholas Carlini, who in a 2023 research paper stated that "diffusion models are explicitly trained to reconstruct the training set." 
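
The objective that the complaint and Carlini's paper describe, learning to undo noise added to training images, can be sketched in a few lines. The code below is a heavily simplified illustration under assumed toy conditions, not Stability AI's implementation: the tiny linear "denoiser," the flattened random "images" and the simple corruption step stand in for the U-Net, real image tensors and formal noise schedule a production diffusion model would use.

```python
import torch
import torch.nn as nn

# Stand-in "denoiser"; a real diffusion model would use a large U-Net here.
class TinyDenoiser(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        # Real models also condition on the noise level; omitted for brevity.
        return self.net(noisy)

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step: corrupt "training images" with noise, then ask the model
# to predict that noise so the originals can be reconstructed.
images = torch.rand(8, 64)              # batch of flattened toy images
noise = torch.randn_like(images)
t = torch.rand(8, 1)                    # random corruption strength per image
noisy = (1 - t) * images + t * noise    # blend each image toward pure noise

predicted_noise = model(noisy)
loss = ((predicted_noise - noise) ** 2).mean()  # reward accurate noise recovery
loss.backward()
optimizer.step()
```

Repeated across a dataset of image-text pairs, this loop is what the complaint means when it quotes Carlini's finding that the model is "explicitly trained to reconstruct the training set."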

DeepFloyd IF, an image generator launched in April by Stability AI, was trained on a high-quality dataset that contained one billion image and text pairs. 

Stability AI did not respond to TheStreet's request for comment on this piece. 

A low-quality dataset will produce low-quality output. 

A piece titled 'Autumn' by Michele Rosenthal.

Courtesy of Michele Rosenthal

Likewise, a biased dataset will produce a biased output. 

And without datasets at all, such models would be unable to produce anything. 

AI expert and Ivanti CPO Dr. Srinivas Mukkamala told TheStreet that LLMs cannot generalize any more than they can feel. Such models can 'create' based only on the information in their training set. 

"It doesn't have senses. It doesn't have human empathy behind it," Mukkamala said, noting that humans, on the other hand, can both feel and express. "As a technologist, LLMs are very limited, from the dataset they've been fed with, to the human that's behind it and what you're asking it to answer." 

Related: Artificial Intelligence is a sustainability nightmare — but it doesn't have to be

How humans create

Humans, on the other hand, are, as Rosenthal said, much more than a mere algorithm. 

The technical process of human creativity, according to artist and Duke assistant professor of psychology and neuroscience Dr. Paul Seli, is one of generation and evaluation. 

The working hypothesis, he told TheStreet, is that the brain's default mode network — which is active during passive rest — is at work during the generation of creative ideas. 

The brain's prefrontal regions are then associated with the process of determining whether those ideas are any good.

A recent study conducted by Dr. Ariana Anderson, assistant professor of psychiatry and biobehavioral sciences at UCLA, found that highly creative people show more randomness in their brain activity than people who are merely highly intelligent.

AI, she told TheStreet in an interview, might be considered "smart" due to the size of the datasets the models are trained on. But AI can't "extrapolate beyond what it's been trained, whereas human brains can."

The random turnings inherent to highly creative brains allow such people to find and explore new creative areas they "wouldn't have found if they were pre-programmed." 

Dr. Roger Beaty, creativity expert and assistant professor of psychology at Penn State, told TheStreet that, while AI output might sometimes seem novel or creative, it's "not really a fair comparison because of the vastness of knowledge that the AI has access to."

Indeed, current LLMs, according to Meta's chief AI scientist Yann LeCun, are trained on text data that he said would take a human "20,000 years to read." 

'Gaussian Noise, Human Hands,' by Eryk Salvaggio, is a series of artworks created using an exploit in Stable Diffusion that results in the model producing abstract images. The result is art that "barely" touches the training data, instead producing "strange forms." 

Courtesy of Eryk Salvaggio

Regardless, Beaty said that AI models "don't have experiential episodic memory, which is also kind of relevant to creativity when you think about drawing on personal experiences for inspiration."

That personal inspiration, according to Seli, is fueled by more than personal experiences and emotions; it comes from a deeply human attempt to "take our subjective experience and share it with other people." 

Human inspiration, Seli said, goes far beyond a 'training set' of previously consumed art, literature or music. 

"One of the primary purposes of art is to serve as a communicative element of our inner experience," Seli said. "We can share emotions in ways that words can't capture." 

Why humans create

How humans get inspired and create marks an important distinction from the strictly mathematical process that AI algorithms employ. 

But the answer to the question of why humans create is perhaps more important. 

Seli conducted a study in July where participants were asked to grade how creative they found a given set of artworks. The entire set of artwork featured exclusively AI-generated art, though the labels each participant saw — which categorized a work as either human or machine-made — were randomized. 

The study concluded that people rated the works labeled as human-made more creative than the same works labeled as AI-generated. 

"If we think that a human did it, then we assume that the art is communicating some experience, and then we like it because we can relate to it or we can gain some sort of empathic insight into what it's like to be that person," Seli said. "But with AI, we don't have the same empathic experience."

Seli noted an inherent emptiness to the communicative abilities of AI art, a sentiment that was echoed by new media artist Eryk Salvaggio.

A viewer can look at an AI-generated image and be moved by it, but regardless of the viewer's impression, "there's nothing on the other side of that communication. There's no intention in that communication. There's no meaning in that communication."

And it is that need to communicate that, according to Seli, remains at the very core of what art is and why humans are so drawn to it. It all comes down to a powerful human urge to take isolated experiences and establish a wider level of connection. 

"The more welcome we feel in this existence, the more understood. And I think that's the point," Salvaggio said. 

A piece titled 'Tea Break' by Michele Rosenthal. 

Courtesy of Michele Rosenthal

The creation of art, even on the professional level, holds true to that base value of a need for raw, human expression. 

The dataset that artists work with, Rosenthal said, goes far beyond reference shots, encompassing instead "everything we've ever experienced and every emotion that we've ever had." 

Rosenthal said that at the core of all human art is a powerful urge to symbolically express some element of the artist's human experience. 

A certain brushstroke, or a particular passage from a poem, or, Seli said, a musical melody, can elicit a strong emotional response from a varied audience. That response is the result of intimate human connection. 

The act of creating art, for David Depasquale, a professional artist who has worked as an art director and character designer in television, has been an avenue for personal expression and escapism throughout his life. 

Pursuing it professionally was a "need."

"It was the only place that I found where my mind just was quiet for a little while," he told TheStreet.

The importance of that process and that personal expression is on display in his professional work as much as it is in his personal work.

None of Depasquale's work comes from a cold place of pure output, a common misconception about an artist's approach when working professionally.

"I get hired as an artist personally for my sensibilities, for my ideas, for my experiences in life and where I've come from," Depasquale said. "Every single character I've designed is from a very personal place."

Good artists, he said, are not hired just for their output; they are hired for the unique humanity, process and soul that they bring to the job of creation, which is why he's not concerned about AI taking his job. 

"You have to draw from something personally to be able to communicate something that someone else is going to recognize and translate," Rosenthal said, noting that the process of exploring artistic inspiration revolves around answering certain questions of "what is the image in my mind that I want to make?" and "what is the feeling that I want it to evoke?" 

This need for symbolic emotional expression — a passion that drove both Rosenthal and Depasquale to pursue art professionally — points to the more philosophical nature of art itself: the vibrant soul ensconced within a piece of art, and the vitality of an often laborious artistic process. 

Related: The Cloud is vulnerable. Is Chain Reaction the solution?

The soul of art

Art, according to Depasquale, is far more than just the final product. 

And artists are far more than their output. 

"You don't play the piece of music you're playing just to get to the end. You play it for the joy of playing the whole thing," Depasquale told TheStreet. "The reality is that the whole process is the art and that's what AI removes." 

A growing obsession with "efficiency and instant gratification" has, according to Depasquale, fueled the recent proliferation of artificially generated art, something that is merely the latest evolution in the ongoing and damaging commoditization of art itself. 

"You hear a piece of AI-generated music and it might sound nice on the surface, but there's nothing behind it," he said. "It's empty. It's an empty door; you can't go through it."

Public conversation that focuses on the economics of art, rather than its inherent emotionality, has further entrenched that commoditization. 

Still, Depasquale does not hate AI, nor does he think AI will take away his livelihood. AI, he said, is a machine and a tool. 

It "has no emotions." 

The real issue Depasquale has with AI is the "humans behind it, and how they've chosen to wield this tool." 

"I'm more afraid of the devaluation of art than I am of a machine taking my job," he said. 

The value of human creativity 

A world without art and creativity, Seli said, "would be a hell of a lonely place for us." 

Rosenthal said that this point represents the "secret weapon" of art itself, something that "AI will never be able to produce." 

"Art making is about human connection and really good art can create more empathy," she said. "The experience of connecting with someone else's experience; that's really powerful. That's what makes art so valuable."

"I hope that doesn't get lost."

But in a more macro sense, human creativity — stretching beyond the arts — improves human society, according to Dr. Anderson. Every technological revolution human society has experienced, Anderson said, was incited by highly creative people. 

"Technology can be very beneficial when it is developed by creative people because it can really alleviate human suffering around the world," she said. "But we should always remember that the creative people are the ones who should be guiding it."

And if human society begins to over-rely on AI as a means of replacing artists and art with algorithmically derived, mathematical amalgamations of the art it was trained on, the result, according to Depasquale, could be the downfall of civilization. 

"It sounds hyperbolic. And I kind of mean it to be, but I think that is what destroys human civilization," he said. "The arts and creativity is what kept humans going since the dawn of time. Creative people are the people who are the keepers of culture and the keepers of human civilization."

When there is nothing else left, Depasquale said, there is still art, which can be made with a stick in the mud as much as it can be made on a computer or with a pen and paper. 

The history of humanity can be traced through art and the stories art has told. This transcends language and modernity, stretching back to the Neanderthals, whose cave paintings today inform historians about how they lived.

"People always have this instinct to create. And I think removing that or trying to remove that from humanity will destroy humanity," he said. "Human culture is exhibited through art; you can't remove that from humanity. That's like taking out somebody's heart. They can't survive without that."

Art represents the essence and importance of boundless human communication, communication not fettered by the constructs of language or time. And the essence, or soul, of that, Depasquale said, cannot be artificially produced. 

"No matter how minutely detailed AI can get, you can never manufacture that, you can't manufacture vision through an artist's eyes or hands or whatever they're using to create," he said. "That's the end of civilization."

Related: Think tank director warns of the danger around 'non-democratic tech leaders deciding the future'

Intentionality in co-creativity

Despite AI's demonstrated ability to wreak havoc across the professional art industry, some are looking at it as a tool that, with the right intentions, can be used to enhance human creativity rather than limit it. 

Seli noted preliminary findings in which the artists themselves seem to be better at using AI image generators than non-artists. 

"It's a tool. It doesn't level the playing field in some equality of outcome way with art," he said. "It's just new tools, and you can use them or not, but if you're better at painting a piece you seem to be better at using DALL-E."

Another entry in Eryk Salvaggio's 'Gaussian Noise, Human Hands' series. 

Courtesy of Eryk Salvaggio

Though he is sure that people will take advantage of these models, Seli has found that AI can be useful on occasion as a sounding board, an initial step in his own process of artistic inspiration and creation. 

For Salvaggio, who recently taught a course on AI art at Bradley University, generative AI represents an enormous tool with countless creative possibilities. 

Art, he told TheStreet, is not the commercial pursuit of image creation. There is much more nuance to the field, to the kinds of creativity AI can enable and enhance, and to the ways in which AI can limit creativity. 

In his artistic work with AI, Salvaggio is always attempting to figure out ways in which he can essentially trick an AI model into making something new.

Much of the work he's done with AI has been through a lens of "creative misuse." 

The reason for this approach is that Salvaggio doesn't trust the datasets that power these models, and so doesn't want to rely on them when using them to create a piece of art that is unique to him. 

"Most artists don't want to create something that is so literally in dialogue with things that they don't know," he said. "If I want to make work that is in dialogue with other artists, I want to choose how that is represented and I want to choose what that conversation is."

In the act of prompting an AI model, Salvaggio doesn't have much say in what image the model spits out. There is no dialogue with other artists, there is no credit to other artists and there is no ability to extend a story begun by other artists. There is only the generation of an abstraction of other work. 

Such models also pose an additional challenge to creatives who are attempting to break new ground; no artist, Salvaggio said, wants to put something out there that "runs the risk of being something that someone else has done." 

Prompting alone is not enough. 

And relying on prompting is creatively limiting — a user might get outputs that are almost exactly what they asked for, an outcome that jeopardizes the relationship between artist and art through what Salvaggio called "curational creativity." 

A still from 'Sarah Palin Forever,' which uses AI images and deepfake voice technology to tell the story of a 17-year-old girl who has spent her entire life inside a looping Sarah Palin rally in Bangor, Maine. The film was a way for Salvaggio to explore the unsavory side of AI and political manipulation, while telling a clearly fictional science-fiction story. 

Courtesy of Eryk Salvaggio

The central component of his creative process is "screwing up, making the thing you didn't want to make and the learning that comes from that." 

"There's a constant flow between myself and the work that I'm making where I am putting something in to the best of my capacities, seeing the limits of those capacities, and then calibrating to what's on the page," he said. "There's almost a dialogue with the ideas that are in my head and the ideas that are being expressed."

The challenges AI poses within Salvaggio's new media space are the same ones faced by many traditional artists. 

The limits of a person's artistic capabilities, Salvaggio said, are usually defined in terms of commercial viability, which is a violation of the artistic process. 

The artistic process is about "our relationship to making" and "finding a way to express ourselves within our own minds," he said. The idea that AI image generators democratize art implies that art wasn't democratic to begin with, which "isn't a problem with art. It's a problem with the market."

As such, Salvaggio's biggest concern is not about AI, something he engages with as a tool, but with the profession of art itself. 

Creativity, he said, will always exist, a sentiment that was echoed almost verbatim by everyone TheStreet spoke with for this piece: AI will not and cannot stop human artistic creation. 

The real question at hand is the industry of professional art, from illustrators to writers and actors, and issues of exploitation and job loss. 

"Humans are naturally creative. We've been making art since the beginnings of human history and we're not going to stop anytime soon," Rosenthal said. "I think it's a shame that the people who are hiring artists seem to have lost respect for artists, seem to think that artists could be replaced by something that is just an algorithm."

Contact Ian with AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.
