Disney software lets directors change actors' expressions in post-production

FaceDirector takes prerecorded shots (top) and combines the actor's different facial expressions from each one

Already, audio engineers can use software such as Pro Tools to change the inflection of a person's voice after it's been recorded. Soon, however, movie directors may likewise be able to alter an actor's facial expressions after their performance has been shot. They could do so using FaceDirector, a program created through a collaboration between Disney Research Zurich and the University of Surrey.

First of all, FaceDirector doesn't generate expressions from scratch. Instead, it morphs between different expressions that were previously recorded in separate takes of the same shot.

As a hypothetical example of how it could be used, a director might start by having an actor deliver the same set of lines twice – once looking scared, and once looking angry. In editing, however, the director might decide that the shot would work best if the actor started out scared and then grew angry as the lines progressed.

Using FaceDirector, it would be possible to start with the "scared" take and then seamlessly transition to the "angry" take within the same continuous shot. The audio from each take remains paired with its corresponding video.
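
To make the idea concrete, here is a rough Python sketch of blending two already-synchronized takes with a time-varying weight. It is only an illustration: the file names and the smoothstep schedule are assumptions, it uses OpenCV for frame I/O, and it performs a plain per-pixel cross-fade, whereas FaceDirector computes a proper facial-correspondence morph between the takes.

```python
# Hypothetical sketch: cross-fade two synchronized takes of the same shot.
# FaceDirector morphs via facial correspondences; a per-pixel blend is only
# a rough stand-in to illustrate the time-varying mix between takes.
import cv2

def smoothstep(t: float) -> float:
    """Ease-in/ease-out ramp from 0 to 1."""
    return t * t * (3.0 - 2.0 * t)

def blend_takes(scared_path: str, angry_path: str, out_path: str) -> None:
    cap_a = cv2.VideoCapture(scared_path)   # take 1: "scared" delivery
    cap_b = cv2.VideoCapture(angry_path)    # take 2: "angry" delivery
    fps = cap_a.get(cv2.CAP_PROP_FPS)
    n = int(min(cap_a.get(cv2.CAP_PROP_FRAME_COUNT),
                cap_b.get(cv2.CAP_PROP_FRAME_COUNT)))
    w = int(cap_a.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap_a.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h))
    for i in range(n):
        ok_a, frame_a = cap_a.read()
        ok_b, frame_b = cap_b.read()
        if not (ok_a and ok_b):
            break
        # Weight ramps from 0 (all "scared") to 1 (all "angry") over the shot.
        alpha = smoothstep(i / max(n - 1, 1))
        mixed = cv2.addWeighted(frame_a, 1.0 - alpha, frame_b, alpha, 0.0)
        writer.write(mixed)
    cap_a.release()
    cap_b.release()
    writer.release()

blend_takes("take_scared.mp4", "take_angry.mp4", "blended_shot.mp4")
```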

To keep the multiple takes in sync, the software searches for audio cues (such as distinctive word sounds) and facial landmarks that occur alongside those cues. Using these markers, it subtly adjusts the playback speed of the takes so that they all stay at the same point in the performance at the same time, with any resulting changes in audio pitch automatically corrected. Editors can then morph back and forth between the takes while retaining the timing of the scene.
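
As a rough illustration of the synchronization step, the sketch below aligns the audio of two takes using MFCC features and dynamic time warping, then retimes one take with a single global speed ratio. The file names are placeholders, and FaceDirector's actual pipeline (facial-landmark matching and fine-grained, per-segment retiming) is considerably more involved than this.

```python
# Hypothetical sketch: align two takes by their audio, then retime take B so
# it keeps pace with take A. MFCCs + dynamic time warping stand in for the
# audio-cue / facial-landmark synchronization described above.
import librosa
import soundfile as sf

sr = 22050
audio_a, _ = librosa.load("take_scared.wav", sr=sr)
audio_b, _ = librosa.load("take_angry.wav", sr=sr)

# MFCCs capture the word sounds that serve as sync cues.
mfcc_a = librosa.feature.mfcc(y=audio_a, sr=sr, n_mfcc=13)
mfcc_b = librosa.feature.mfcc(y=audio_b, sr=sr, n_mfcc=13)

# Dynamic time warping finds which frames of take B correspond to take A.
_, warp_path = librosa.sequence.dtw(X=mfcc_a, Y=mfcc_b)
warp_path = warp_path[::-1]  # librosa returns the path end-to-start

# Use the path endpoints to estimate one global speed ratio (a crude stand-in
# for the per-segment speed tweaks described above).
frames_a = warp_path[-1][0] + 1
frames_b = warp_path[-1][1] + 1
rate = frames_b / frames_a

# time_stretch changes speed while preserving pitch, covering both the speed
# tweak and the pitch correction mentioned in the article.
audio_b_synced = librosa.effects.time_stretch(audio_b, rate=rate)
sf.write("take_angry_synced.wav", audio_b_synced, sr)
```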

Additionally, the initial footage can be captured using regular 2D cameras, with no need for any extra equipment.

Scenes edited with FaceDirector can be seen in the video below.

Source: Disney Research

FaceDirector: Continuous Control of Facial Performance in Video
