When the line between machine and artist becomes blurred

Mario Klingemann’s ‘Neural Glitch Portrait 153552770’ was created using a generative adversarial network. Credit: Mario Klingemann, Author provided

With AI becoming incorporated into more aspects of our daily lives, from writing to driving, it's only natural that artists would also start to experiment with artificial intelligence.

In fact, Christie's will be selling its first piece of AI art later this month – a blurred face titled "Portrait of Edmond Belamy."

The piece being sold at Christie's is part of a new wave of AI art created via machine learning. Paris-based artists Hugo Caselles-Dupré, Pierre Fautrel and Gauthier Vernier fed thousands of portraits into an algorithm, "teaching" it the aesthetics of past examples of portraiture. The algorithm then created "Portrait of Edmond Belamy."

The painting is "not the product of a human mind," Christie's noted in its preview. "It was created by an artificial intelligence, an algorithm defined by [an] algebraic formula."

If artificial intelligence is used to create images, can the final product really be thought of as art? Should there be a threshold of influence over the final product that an artist needs to wield?

As the director of the Art & AI lab at Rutgers University, I've been wrestling with these questions – specifically, the point at which the artist should cede credit to the machine.

The machines enroll in art class

Over the last 50 years, several artists have written computer programs to generate art – what I call "algorithmic art." It requires the artist to write detailed code with an actual visual outcome in mind.

When creating AI art, the artist’s hand is involved in the selection of input images, tweaking the algorithm and then choosing from those that have been generated. Credit: Ahmed Elgammal, Author provided

One of the earliest practitioners of this form was Harold Cohen, who wrote the program AARON to produce drawings that followed a set of rules Cohen had created.

But the AI art that has emerged over the past couple of years incorporates machine learning technology.

Artists create algorithms not to follow a set of rules, but to "learn" a specific aesthetic by analyzing thousands of images. The algorithm then tries to generate new images in adherence to the aesthetics it has learned.

To begin, the artist chooses a collection of images to feed the algorithm, a step I call "pre-curation."

For the purpose of this example, let's say the artist chooses traditional portraits from the past 500 years.
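To make the pre-curation step concrete, here is a minimal sketch, in Python with PyTorch and torchvision, of loading a folder of portrait images as training data. The folder name, image size and normalization are illustrative assumptions, not the setup used by any artist mentioned in this article.

```python
# Hypothetical "pre-curation" step: load the artist's chosen portrait scans
# from a local folder. Folder layout, image size and batch size are assumptions.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),                 # small, fixed size for the toy model below
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # scale pixel values to [-1, 1]
])

# ImageFolder expects one or more subdirectories of images, e.g. portraits/oil/, portraits/pastel/
portraits = datasets.ImageFolder("portraits/", transform=transform)
loader = DataLoader(portraits, batch_size=64, shuffle=True)
```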

Most of the AI artworks that have emerged over the past few years have used a class of algorithms called "generative adversarial networks." First introduced by computer scientist Ian Goodfellow in 2014, these algorithms are called "adversarial" because there are two sides to them: One generates random images; the other has been taught, via the input, how to judge these images and deem which best align with the input.
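To make the "two sides" concrete, the sketch below wires up a deliberately tiny generative adversarial network in PyTorch: a generator that turns random noise into an image, and a discriminator trained to tell the curated portraits from the generator's output. The layer sizes, learning rates and training loop are assumptions for illustration; real projects typically use much larger convolutional architectures.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other.
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random noise vector the generator starts from
IMG_PIXELS = 64 * 64 * 3  # flattened 64x64 RGB portrait (matches the loader above)

generator = nn.Sequential(            # noise -> candidate portrait
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Tanh(),
)
discriminator = nn.Sequential(        # portrait -> probability it is "real"
    nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round on a batch of pre-curated portraits."""
    batch = real_images.size(0)
    real = real_images.view(batch, -1)  # flatten to match the linear layers
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Teach the discriminator to separate real portraits from generated ones.
    fake = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real), ones) + loss_fn(discriminator(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Teach the generator to produce images the discriminator accepts as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

for images, _ in loader:  # `loader` comes from the pre-curation sketch above
    train_step(images)
```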

So the portraits from the past 500 years are fed into a generative AI algorithm that tries to imitate these inputs. The algorithms then come back with a range of output images, and the artist must sift through them and select those he or she wishes to use, a step I call "post-curation."

So there is an element of creativity: The artist is very involved in pre- and post-curation. The artist might also tweak the algorithm as needed to generate the desired outputs.
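In this toy setup, post-curation might look like the sketch below: sample a batch of candidate images from the trained generator and save them to disk so the artist can pick through them by hand. The sample count and output folder are arbitrary assumptions; the generator and LATENT_DIM are carried over from the sketch above.

```python
# Hypothetical "post-curation" step: generate candidate images and save them
# for manual review. Assumes `generator` and LATENT_DIM from the sketch above.
import os
import torch
from torchvision.utils import save_image

os.makedirs("candidates", exist_ok=True)
with torch.no_grad():
    samples = generator(torch.randn(200, LATENT_DIM))  # 200 random candidates
    samples = samples.view(-1, 3, 64, 64)              # restore image shape
    for i, img in enumerate(samples):
        save_image((img + 1) / 2, f"candidates/portrait_{i:03d}.png")  # map [-1, 1] back to [0, 1]
```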

When fed portraits from the last five centuries, an AI generative model can spit out deformed faces. Credit: Ahmed Elgammal, Author provided

Serendipity or malfunction?

The generative algorithm can produce images that surprise even the artist presiding over the process.

For example, a generative adversarial network being fed portraits could end up producing a series of deformed faces.

What should we make of this?

Psychologist Daniel E. Berlyne studied the psychology of aesthetics for several decades. He found that novelty, surprise, complexity, ambiguity and eccentricity tend to be the most powerful stimuli in works of art.

The generated portraits from the generative adversarial network – with all of the deformed faces – are certainly novel, surprising and bizarre.

They also evoke British figurative painter Francis Bacon's famous deformed portraits, such as "Three Studies for a Portrait of Henrietta Moraes."

But there's something missing in the deformed, machine-made faces: intent.

‘Three Studies for the Portrait of Henrietta Moraes,’ Francis Bacon, 1963. Credit: MoMA

While it was Bacon's intent to make his faces deformed, the deformed faces we see in the example of AI art aren't necessarily the goal of the artist nor the machine. What we are looking at are instances in which the machine has failed to properly imitate a human face, and has instead spit out some surprising deformities.

Yet this is exactly the sort of image that Christie's is auctioning.

A form of conceptual art

Does this outcome really indicate a lack of intent?

I would argue that the intent lies in the process, even if it doesn't appear in the final image.

For example, to create "The Fall of the House of Usher," artist Anna Ridler took stills from a 1929 film version of the Edgar Allan Poe short story "The Fall of the House of Usher." She made ink drawings from the still frames and fed them into a generative model, which produced a series of new images that she then arranged into a short film.

Another example is Mario Klingemann's "The Butcher's Son," a nude that was generated by feeding the algorithm images of stick figures and images of pornography.

I use these two examples to show how artists can really play with these AI tools in any number of ways. While the final images might have surprised the artists, they didn't come out of nowhere: There was a process behind them, and there was certainly an element of intent.

On the left: A still from ‘The Fall of the House of Usher’ by Anna Ridler. On the right: ‘The Butcher’s Son’ by Mario Klingemann.

Nonetheless, many are skeptical of AI art. Pulitzer Prize-winning art critic Jerry Saltz has said he finds the art produced by AI artists boring and dull, including "The Butcher's Son."

Perhaps they're correct in some cases. In the deformed portraits, for example, you could argue that the resulting images aren't all that interesting: They're really just imitations – with a twist – of pre-curated inputs.

But it's not just about the final image. It's also about the creative process – one that involves an artist and a machine collaborating to explore new visual forms in revolutionary ways.

For this reason, I have no doubt that this is conceptual art, a form that dates back to the 1960s, in which the idea behind the work and the process of making it are more important than the outcome.

As for "The Butcher's Son," one of the pieces Saltz derided as boring?

It recently won the Lumen Prize, a prize dedicated to art created with technology.

As much as some critics might decry the trend, it seems that AI art is here to stay.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

