Motion transfer from a source onto two target subjects. Credit: arXiv:1808.07371 [cs.GR]

A small team of researchers at UC Berkeley has used neural networks to create software that copies the dance moves of one person onto another, making it look like the second person is doing the dancing. The team, made up of Caroline Chan, Shiry Ginosar, Tinghui Zhou and Alexei Efros, has written a paper describing the software and posted it on the arXiv preprint server.

Over the past several months, research into using neural networks to map one person's face onto another person's body has given rise to what have come to be known as "deepfake" videos. In many cases, people have used an app to map a celebrity's facial features onto the body of a porn actor, making it look like the celebrity is engaging in pornography. In this new effort, the research group has extended that idea to the entire body, copying the movements of one person onto another and making it appear as if the second person is doing something they have never actually done, in this case, dancing. The team has posted a video demonstrating the software.

The researchers describe their software as transferring motion from one person to another. It makes use of two generative adversarial networks (GANs), a video of one person dancing, and a video of a second person moving around. The software first reduces the person in the first video to an animated stick figure, then does the same for the person in the second video. It then gives the second stick figure the movements of the first, and transforms that stick figure back into a realistic rendering of the second person, who now appears to mimic the first person's dance moves. Additional processing smooths the motion and keeps the face looking sharp.
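At a high level, this is a pose-guided image-to-image translation pipeline: detect the pose in each source frame, render it as a stick figure, and have a generator trained on the target subject synthesize that subject in the given pose. The PyTorch sketch below illustrates the idea only and is not the authors' implementation; `detect_pose` and `render_stick_figure` are hypothetical stand-ins for an off-the-shelf pose estimator (such as OpenPose) and a simple keypoint renderer, and the generator is a toy stand-in for a pix2pix-style network.

```python
import torch
import torch.nn as nn

class PoseToImageGenerator(nn.Module):
    """Toy pose-to-image generator: maps a rendered stick-figure frame
    to an RGB frame of the target subject (pix2pix-style stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, pose_img):
        # pose_img: (1, 3, H, W) rendered stick figure; returns (1, 3, H, W)
        return self.net(pose_img)

def transfer_motion(source_frames, generator, detect_pose, render_stick_figure):
    """For each source frame: extract the pose, render it as a stick figure,
    and let the target-trained generator synthesize the target subject
    striking that pose. `detect_pose` and `render_stick_figure` are assumed
    helpers, not a real library API."""
    output = []
    for frame in source_frames:
        keypoints = detect_pose(frame)             # pose estimation (assumed)
        pose_img = render_stick_figure(keypoints)  # (1, 3, H, W) tensor (assumed)
        output.append(generator(pose_img))
    return output
```

In the actual paper, the generator is trained adversarially against a discriminator on footage of the target subject, which is why only a few minutes of the target performing standard moves are needed.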

The researchers have not made clear the purpose of the software, but it seems likely that it will be made into an app, one that lets people appear to have professional dance moves in their own videos. It also seems possible that such an app could be put to less innocent uses.

More information: Everybody Dance Now, arXiv:1808.07371 [cs.GR] arxiv.org/abs/1808.07371

Abstract
This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as a per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject's appearance. We adapt this setup for temporally coherent video generation including realistic face synthesis. Our video demo can be found at this https URL.
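The "temporally coherent video generation" the abstract mentions can be approximated by conditioning each generated frame on the previously generated one, so consecutive frames cannot drift independently. The sketch below shows that conditioning trick under the same toy assumptions as above; the network and its channel counts are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    """Toy generator conditioned on the current pose image AND the
    previously generated frame, encouraging frame-to-frame coherence."""
    def __init__(self):
        super().__init__()
        # 6 input channels: 3 for the pose image + 3 for the previous output
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, pose_img, prev_frame):
        return self.net(torch.cat([pose_img, prev_frame], dim=1))

def generate_video(pose_images, generator):
    """Generate frames sequentially, feeding each output back in as the
    conditioning frame for the next step. pose_images: list of (1, 3, H, W)
    tensors."""
    frames = []
    prev = torch.zeros_like(pose_images[0])  # black frame bootstraps the sequence
    for pose_img in pose_images:
        prev = generator(pose_img, prev)
        frames.append(prev)
    return frames
```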
