July 1, 2020 feature
An AI painter that creates portraits based on the traits of human subjects
Over the past decade or so, researchers have been developing increasingly advanced artificial intelligence (AI) systems for a wide range of applications. These include computational techniques that can interact with humans, analyze large quantities of data, identify the most salient parts of texts, and much more.
In recent years, some research teams have also been exploring the potential of AI in artistic disciplines, developing systems that can automatically create paintings, poetry, music or other artworks. A team of researchers from the iViz Lab at Simon Fraser University (SFU) in Vancouver, Canada, recently developed a creative system that paints portraits of humans, conveying each subject's personality traits and emotions.
This unique AI painter, presented in a paper pre-published on arXiv, has a face-to-face talk with human users to learn more about their unique qualities and feelings. It then uses this information to create portrait paintings that best reflect the user's traits.
"Our research lab uses a cognitive science approach to AI modeling and takes on very human cognitive notions to model AI, including human creativity, human empathy and emotions," Steve DiPaola, a professor for the school of Interactive Arts & Technology (SIAT) at Simon Fraser University, told TechXplore.
The iViz Lab conducts studies focusing on two distinct research areas: embodied conversational agents (ECAs) and AI creativity. Ph.D. students at SFU who work in the lab are typically assigned to one of two groups: they either work on creating AI 3-D character agents that can communicate with humans or on creating AI-based artistic computational models. The AI system developed by DiPaola and his colleagues Nilay Yalcin and Nouf Abukhodair combines these two research areas in a unique and innovative way.
"We realized that we can combine our two research groups by having the empathetic ECA avatar we developed interview a user about what they felt about a conference, get an emotion read on their personality (using the Big 5 personality modeling) and pass that evaluation to our AI creativity system that then creates a portrait that best fits them," DiPaola said. "People watching our system in action felt that the portrait truly fit the personality of those who went up to talk with our AI avatar."
Most creative AI systems developed in the past are based on a single computational method. In contrast, the creative AI system developed by DiPaola and his colleagues employs a variety of different techniques which, in combination, replicate the process through which human artists paint portraits.
The researchers trained a modified version of DeepDream, a computer vision model developed at Google, on an art database they compiled. In addition, they combined an algorithm known as Deep Style with techniques that divide images into segments, which allows their system to combine different art styles and processes, applying them to different regions of a painting. Finally, they used a particle system, a technique often used in computer-based graphics design, to perform the actual brushwork that ultimately creates the painting.
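The segmentation step above lets different styles land on different regions of the canvas (for example, one treatment for the face and another for the background). The following is a minimal sketch of that idea under stated assumptions: the segmentation mask is precomputed, and the "styles" are simple color transforms rather than the DeepDream/Deep Style networks the researchers actually use, so the example stays self-contained.

```python
import numpy as np

def blend_styles_by_region(image, mask, style_a, style_b):
    """Apply style_a where mask == 1 (e.g. the face) and style_b elsewhere."""
    styled_a = style_a(image)
    styled_b = style_b(image)
    mask3 = mask[..., None]  # broadcast the 2-D mask over the RGB channels
    return mask3 * styled_a + (1 - mask3) * styled_b

h, w = 4, 4
image = np.random.rand(h, w, 3)          # stand-in portrait photo
mask = np.zeros((h, w))
mask[1:3, 1:3] = 1.0                     # stand-in "face" segment

warm = lambda img: np.clip(img * [1.2, 1.0, 0.8], 0, 1)  # warm tint
cool = lambda img: np.clip(img * [0.8, 1.0, 1.2], 0, 1)  # cool tint

out = blend_styles_by_region(image, mask, warm, cool)
```

In a full system, a soft (fractional) mask would feather the boundary between regions before the particle system lays down brushstrokes over the blended result.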
"On the ECA side, we use our own natural language processing (NLP) systems combined with other AI systems to note the emotions conveyed by the face of the user that is talking to it in real time," DiPaola said. "This allows our system to get a sense of the person by processing the amount of stress in their voice, the emotions on his/her face, all in sync with the words they are saying."
The conversational agent devised by DiPaola and his colleagues has face-to-face dialogs with humans via a realistic 3-D avatar. This avatar can communicate in a natural way, accompanying its words with hand gestures and facial expressions.
The agent is also backed by an on-the-fly empathy model that allows it to appear more sincere and tailors its responses to match the needs, emotions and traits of individual users. In the past, the researchers proposed the use of this AI-powered 3-D avatar in healthcare settings or to provide virtual coaching. In their recent work, however, they used it to gather information about a person's traits and emotions that could then be used to create more expressive portraits.
"We were able to successfully extract emotional and personality traits of an individual during conversations with our avatar and map these to a uniquely different AI space: that of giving color, style and texture to a portrait," DiPaola said. "Remarkably, most users felt that our system's paintings effectively represented who they are."
The researchers presented their system to the public at the NeurIPS 2019 Conference in Vancouver. Conference attendees could take part in demonstration sessions in which the AI system interacted with them and then created portraits in real time, reflecting what it had learned about each subject. Most of those who had their portrait painted by the AI system were impressed by the result, feeling that it reflected their personality and emotions well.
The recent work by DiPaola and his colleagues highlights the interesting possibilities associated with the use of AI in creative and artistic disciplines. The AI creative systems they developed have already been showcased at art exhibitions worldwide.
In the future, the new AI system could inspire others to explore ways of merging advanced computational techniques and art. Meanwhile, the researchers are also collaborating with several companies to introduce their ECA avatar within healthcare settings, for instance, adapting it so that it can serve as a mental health coach, as a home assistant for the elderly, or as an agent that supports people who are trying to settle in a new country.
"With the AI creativity system, we have now started exploring word-based creativity, such as the production of poems and prose," DiPaola said. "We are working on making our avatars more human-like in what they say, while also trying to innovate the field of art and creativity using AI. Our visuals (both still and animated) were presented at several major art shows, including some at the MoMA (Museum of Modern Art) in NYC and Whitney museums."
Jeremy Owen Turner et al., Integrating Cognitive Architectures into Virtual Character Design (2016). DOI: 10.4018/978-1-5225-0454-2
iVizLab Research Lab: ivizlab.sfu.ca
DiPaola Portfolio: www.dipaola.org/art
AI Research on Empathetic Painter: ivizlab.org/research/ai_empathetic_pianter/
AI Affective Virtual Human: ivizlab.org/research/ai-affective-virtual-human/
© 2020 Science X Network