THE TECH BEHIND THAT FAMOUS PIECE OF AI ART

Christie’s made history as the first auction house to sell a piece of artificial-intelligence art, “Edmond de Belamy,” and the sale is making waves in the AI world.

At first glance, the painting looks like one of Claude Monet’s early attempts at portraits: half-realized, blurry and hauntingly sad. But with every closer step, it becomes clear that the piece looks a little more like a copycat: bare, white canvas fills parts of the frame, and the face, what little can be gleaned from the abstract, looks expressionless, practically robotic.

That’s because the piece, sold on Oct. 25 to an anonymous phone bidder at Christie’s auction house, was a copycat. Well, it was and it wasn’t: it was artificial intelligence. And it’s causing quite a stir.

“Edmond de Belamy, from La Famille de Belamy,” from the French art collective Obvious, was auctioned at Christie’s. It sold for just under half a million dollars, more than forty times the initial price estimate. Obvious created the image using a Generative Adversarial Network, or GAN. The trio of artists who make up the collective, Hugo Caselles-Dupré, Gauthier Vernier and Pierre Fautrel, fed the GAN 15,000 portraits painted between the 14th and 20th centuries.

In their write-up of the piece, Christie’s spoke glowingly of the artwork. According to a Christie’s spokesperson in a statement, “The exceptional price realized for Edmond de Belamy reinforces that this is a significant moment in time for the artworld [sic].”

The spokesperson noted that the sale made Christie’s the first auction house to do business with artificial-intelligence art.

Snagging that historic first is why they did it, according to Marian Mazzone, chair of the art and art history department at the College of Charleston, and a member of the Digital Humanities Research Laboratory at Rutgers University.

“This project sold at Christie’s was not very interesting; it was not very sophisticated,” said Mazzone. “It’s an interest in novelty on their part. Christie’s is able to now say forever, ‘We sold the first piece of AI art.’”

Mazzone’s criticism, and the reaction from many in the AI community, stems from the fact that the algorithm used isn’t original. At all.

“GAN is not new,” said Ahmed Elgammal, the founder and director of Rutgers’ Art and AI Lab, as well as a professor in the university’s computer science department. “It’s as old as AI itself.”

The original algorithm was introduced in 2014 by Ian Goodfellow, a researcher in machine learning, and has been used frequently since, including by Facebook.

Obvious used a modified GAN created by Robbie Barrat, a 19-year-old artist and student at Stanford University in California. A quick look at his Twitter shows that Barrat, who did not respond to a request for comment, is doing things similar to Obvious: swirling, hazy landscapes and bulbous, unfamiliar nude bodies.

While Obvious thanked Goodfellow and Barrat in a statement, their actions have still rubbed some people the wrong way.

In simple terms, a GAN is a two-sided operation between a generator and a discriminator. The generator creates images from random noise and presents them to the discriminator, which judges whether each image falls within the same parameters as the training data it was fed.

“It has one layer that has access to the images, and the other layer is going to make the images and start from scratch,” Elgammal explained. “The second layer sends images to the first layer and asks, ‘Is it a dog or not? A cat or not?’ and the first layer says yes or no until the second layer gets it right.”

Eventually, the machine could improve itself so much that the images it generates are indistinguishable from the source material. It’s sort of a Turing test for the AI age.
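That back-and-forth can be sketched in a few lines of code. The toy below is a minimal illustration of the adversarial objective, not the code Obvious or any lab actually uses: real GANs pit two deep networks against each other over images, while here a one-parameter logistic "discriminator" judges plain numbers so the two opposing losses stay visible.

```python
import math

def discriminator(x, w, b):
    # Toy judge: logistic regression on a single number. A real GAN's
    # discriminator is a deep network looking at whole images.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def discriminator_loss(real, fake, w, b):
    # The discriminator wants D(real) -> 1 and D(fake) -> 0.
    total = -sum(math.log(discriminator(x, w, b)) for x in real)
    total += -sum(math.log(1.0 - discriminator(x, w, b)) for x in fake)
    return total / (len(real) + len(fake))

def generator_loss(fake, w, b):
    # The generator wants the opposite: D(fake) -> 1, i.e. to fool the judge.
    return -sum(math.log(discriminator(x, w, b)) for x in fake) / len(fake)

# "Real" data clusters around 4.0; a naive generator emits values near 0.
real = [3.8, 4.0, 4.2]
fake = [-0.2, 0.0, 0.3]
w, b = 2.0, -4.0  # a discriminator that separates the two clusters

# The judge easily tells these fakes from the real data...
assert discriminator_loss(real, fake, w, b) < discriminator_loss(real, real, w, b)
# ...so the generator's loss is high; it improves by pushing its
# outputs toward 4.0 until the judge can no longer tell the difference.
assert generator_loss(fake, w, b) > generator_loss(real, w, b)
```

Training alternates between the two updates: the discriminator sharpens its yes/no answers, then the generator adjusts to fool the sharper judge, which is exactly the loop Elgammal describes ending when the fakes become indistinguishable.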

Beyond that, as Elgammal’s lab has demonstrated, the machine can organize paintings it’s given based on style.

“Trained to predict style, based only on noisy discrete style labels, without being given any notion of time, the machine encoded art history in a smooth chronology,” the lab wrote in a 2018 study, “The Shape of Art History in the Eyes of the Machine.”

The machine was even able to detect stylistic similarities between seemingly unrelated pieces of art that would be indecipherable to the human eye.

The “Edmond de Belamy” piece is the latest portrait in a fictitious Belamy family tree. The algorithm Obvious used builds on the generator-discriminator duo; the project sought to see what kind of image a GAN would create if it were fed a very specific diet of training data.

Though a GAN can eventually create images nearly identical to the human-made ones it was trained on, Elgammal and Mazzone are more interested in understanding imperfection than in creating perfection.

“Art is about generating novel things. If [a GAN creation] looks exactly the same, that’s not art; it’s replication,” said Elgammal. The art comes in the failure. “That’s where we get surprised.”

In his lab, Elgammal has been working on a variation of GAN called “Creative Adversarial Network,” or CAN. He believes it differs from GAN through the simple fact that “CAN is actually good art.”

With GAN, “it’s obvious that it’s machine-created art,” he said. To test their theory, Elgammal and his team put up two pieces of art, one CAN-created and one manmade, and tested whether people could tell the difference. He said that 75 percent of the time, people believed the CAN art was manmade.

Elgammal’s CAN also forces the machine into a style break: to create something that follows general aesthetics but is entirely different from pre-existing styles of art.

“Style is established and an artist moves out of that style,” he said. “At some point, the artist gets bored and wants to make something new.”

By giving the machine those parameters, Elgammal is ensuring that what it creates is stylistically unique. To prevent meaningless garble, he included the “aesthetics” caveat. That’s because while styles go in and out of fashion, aesthetics, such as rules of composition or the concept of warm colors versus cool colors, remain common across artistic divides.
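One way Elgammal’s published CAN work expresses this “new style, familiar aesthetics” goal is an extra penalty: the generator is rewarded when the discriminator cannot file its output under any one known style. The sketch below is an illustrative toy of that style-ambiguity idea (the function name and the toy probabilities are this article’s assumptions, not the lab’s code):

```python
import math

def style_ambiguity_loss(style_probs):
    # Cross-entropy between the discriminator's style guesses and a
    # uniform spread over the K known styles. It is smallest when the
    # work looks equally plausible under every style, i.e. fits none
    # of them cleanly.
    k = len(style_probs)
    return -sum(math.log(p) / k for p in style_probs)

# A work the discriminator confidently files under a single style
# (say, Cubism) is penalized more than one that defies classification.
confident = [0.97, 0.01, 0.01, 0.01]
ambiguous = [0.25, 0.25, 0.25, 0.25]
assert style_ambiguity_loss(ambiguous) < style_ambiguity_loss(confident)
```

Minimizing this term alongside the usual “does it look like art at all?” signal is what pushes the machine toward Elgammal’s style break: recognizably art, but not recognizably any existing style.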

The AI art scene is still considered to be in its nascence. According to Elgammal, machine learning could evolve towards creating other types of art, though things like AI-generated novels might be quite a while away.

During the evolutionary meantime, it’s not clear how invested auction houses will continue to be.

The press statement ended with a Christie’s spokesperson writing, “It remains to be seen whether Christie’s will sell AI art in the future.”
