Deep neural network generates realistic character-scene interactions

A selection of results using the researchers' method to generate scene interaction behaviors. Credit: SIGGRAPH Asia

A key part of bringing 3-D animated characters to life is the ability to depict their physical motions naturally in any scene or environment.

Animating characters to interact naturally with objects and the environment requires synthesizing different types of movements in a complex manner; such motions can differ greatly not only in their postures, but also in their duration, contact patterns, and possible transitions. To date, most machine learning-based methods for user-friendly motion control have been limited to simpler actions or single motions, like commanding an animated character to move from one point to the next.

Computer scientists from the University of Edinburgh and Adobe Research, the company's team of research scientists and engineers shaping early-stage ideas into innovative technologies, have developed a novel, data-driven technique that uses machine learning to precisely guide animated characters by inferring a variety of motions—sitting in chairs, picking up objects, running, side-stepping, climbing over obstacles and through doorways—and achieves this in a user-friendly way with simple control commands.

The researchers will demonstrate their work, Neural State Machine for Character-Scene Interactions, at ACM SIGGRAPH Asia, held Nov. 17 to 20 in Brisbane, Australia. SIGGRAPH Asia, now in its 12th year, attracts the most respected technical and creative people from around the world in computer graphics, animation, interactivity, gaming, and emerging technologies.

To animate character-scene interactions with objects and the environment, there are two main aspects to consider, say the researchers: planning and adaptation. First, in order to complete a given task, such as sitting in a chair or picking up an object, the character needs to plan and transition through a set of different movements. For example, this can include starting to walk, slowing down, and turning around while accurately placing its feet and interacting with the object, before finally continuing to another action. Second, the character needs to naturally adapt the motion to variations in the shape and size of objects, and avoid obstacles along its path.
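To make the planning aspect concrete, the sketch below shows how a goal such as sitting might decompose into a chain of intermediate action states. Note that the Neural State Machine learns valid transitions implicitly from motion capture data rather than from a hand-written table; the state names and transition table here are purely illustrative assumptions, not taken from the paper.

```python
from collections import deque

# Hypothetical transition table illustrating the "planning" aspect:
# a goal action is reached by chaining intermediate movement states.
TRANSITIONS = {
    "idle": ["walk"],
    "walk": ["slow_down", "avoid_obstacle"],
    "avoid_obstacle": ["walk"],
    "slow_down": ["turn"],
    "turn": ["sit", "pick_up"],
    "sit": [],            # terminal goal action
    "pick_up": ["carry"],
    "carry": ["walk"],
}

def plan(start, goal):
    """Breadth-first search for a valid chain of action states."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in TRANSITIONS.get(path[-1], []):
            if nxt not in path:   # avoid revisiting states
                queue.append(path + [nxt])
    return []

print(plan("idle", "sit"))
# ['idle', 'walk', 'slow_down', 'turn', 'sit']
```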

"Achieving this in production-ready quality is not straightforward and very time-consuming. Our Neural State Machine instead learns the motion and required state transitions directly from the scene geometry and a given goal action," says Sebastian Starke, senior author of the research and a Ph.D. student at the University of Edinburgh in Taku Komura's lab. "Along with that, our method is able to produce multiple different types of motions and actions in high quality from a single network."

Using motion capture data, the researchers' framework learns how to most naturally transition the character from one movement to the next; for example, stepping over an obstacle blocking a doorway and then stepping through the doorway, or picking up a box and then carrying it to set down on a nearby table or desk.

The technique infers the character's next pose in the scene based on its previous pose and the scene geometry. Another key component of the researchers' framework is that it enables users to interactively control and navigate the character with simple control commands. Additionally, the framework does not need to retain all of the original captured data; the network heavily compresses it while preserving the essential content of the animations.
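In code terms, this kind of frame-by-frame inference can be sketched roughly as follows. This is a minimal illustration of an autoregressive pose controller, not the authors' published architecture; all names, dimensions, and the stand-in linear "network" (encode_scene, AutoregressiveController, POSE_DIM, and so on) are hypothetical.

```python
import numpy as np

# Hypothetical feature sizes; the real model's dimensions differ.
POSE_DIM, SCENE_DIM, GOAL_DIM = 69, 32, 8

def encode_scene(scene_points, pose):
    """Toy scene encoding: pool the geometry nearest the character.

    A real system would use a learned volumetric or geometry sensor;
    this stub just flattens the closest points into a fixed-size code.
    """
    root = pose[:3]  # assume the root position comes first in the pose
    dists = np.linalg.norm(scene_points - root, axis=1)
    nearest = scene_points[np.argsort(dists)[:11]].flatten()[:SCENE_DIM]
    return np.pad(nearest, (0, SCENE_DIM - nearest.size))

class AutoregressiveController:
    """Predicts the next pose from (previous pose, scene code, goal)."""

    def __init__(self, seed=0):
        in_dim = POSE_DIM + SCENE_DIM + GOAL_DIM
        # Stand-in for a trained network: a small random linear map.
        self.weights = np.random.default_rng(seed).normal(
            scale=0.01, size=(in_dim, POSE_DIM))

    def step(self, pose, scene_points, goal):
        features = np.concatenate(
            [pose, encode_scene(scene_points, pose), goal])
        return pose + features @ self.weights  # residual pose update

# Roll the controller forward frame by frame: each predicted pose is
# fed back in as the input for the next frame (autoregressive inference).
controller = AutoregressiveController()
pose = np.zeros(POSE_DIM)
scene = np.random.default_rng(1).normal(size=(500, 3))  # dummy geometry
goal = np.eye(GOAL_DIM)[0]  # one-hot goal action, e.g. "sit"
trajectory = []
for _ in range(240):  # roughly four seconds at 60 fps
    pose = controller.step(pose, scene, goal)
    trajectory.append(pose.copy())
```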

"The technique essentially mimics how a human intuitively moves through a scene or environment and how it interacts with objects, realistically and precisely," says Komura, coauthor and chair of at the University of Edinburgh.

Down the road, the researchers intend to work on other related problems in data-driven character animation, including motions where multiple actions can occur simultaneously, or animating close-character interactions between two humans or even crowds.

More information: sa2019.siggraph.org/

Provided by Association for Computing Machinery
