Here's something you might not know about foreign-language films ... when they're dubbed into English, the editors don't necessarily just go with the most literal translation. Instead, they observe the actors' lip movements, then choose English dialogue that at least somewhat matches up with those movements. Now, a team from Disney Research Pittsburgh and the University of East Anglia has developed a system that does so automatically, and that offers a wider range of suggested alternative phrases.

Facial movements associated with speech sounds are known as visemes. Traditionally, when a film is dubbed, writers study sequential still frames of an actor's visemes. Based on the observed static lip shapes, they work out which sounds those shapes correspond to, then create English dialogue that incorporates those sounds in the same order. Of course, that dialogue also has to carry the same meaning as the original.

By contrast, the Disney/East Anglia system looks at whole moving sequences of visemes, or "dynamic visemes." Software then comes up with visually matching phrases, offering a wider range of choices than would be possible by analyzing static visemes alone.

This is because a package of lip movements can be matched to a greater number of sound combinations than the sum of its static parts – the number of syllables in the original dialogue and in the alternative doesn't even need to be the same. As can be seen in the video below, for example, the dynamic viseme sequence for the words "clean swatches" yields 9,658 alternative phrases, whereas its combined sequential static visemes deliver only 413.
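The core idea – that one sequence of lip shapes can be "spoken" by many different phrases, because visually similar phonemes share a viseme class – can be sketched in a few lines of Python. This is a toy illustration, not Disney's actual method or data: the phoneme-to-viseme map and the mini pronunciation dictionary below are made up for demonstration.

```python
# Toy sketch: several phonemes look identical on the lips (e.g. /p/, /b/
# and /m/ are all bilabial closures), so one viseme sequence maps to many
# possible phrases. Both tables below are illustrative assumptions.

# Hypothetical phoneme -> viseme-class map.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "n": "alveolar",
    "ae": "open",
    "iy": "spread",
}

# Hypothetical pronunciation dictionary: word -> phoneme sequence.
LEXICON = {
    "pat": ["p", "ae", "t"],
    "bat": ["b", "ae", "t"],
    "mad": ["m", "ae", "d"],
    "man": ["m", "ae", "n"],
    "fee": ["f", "iy"],
    "vee": ["v", "iy"],
}

def visemes(phonemes):
    """Project a phoneme sequence down to its viseme-class sequence."""
    return [PHONEME_TO_VISEME[p] for p in phonemes]

def matching_words(target_word):
    """Find every word whose viseme sequence matches the target's exactly."""
    target = visemes(LEXICON[target_word])
    return [word for word, phons in LEXICON.items()
            if visemes(phons) == target]

# "pat", "bat", "mad" and "man" all reduce to bilabial-open-alveolar,
# so any of them would look the same on the lips.
print(matching_words("pat"))  # → ['pat', 'bat', 'mad', 'man']
```

Scaling this up from single words to phrases is where the combinatorics come from: each position in the viseme sequence admits several phonemes, so the number of candidate phrases multiplies quickly – which is how a short phrase can produce thousands of visual matches.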

That said, most of those 9,658 phrases are pretty nonsensical – along the lines of "she is to a scissor." Still, they at least give writers more of a sense of the directions in which they could head in order to keep the words synced up with the mouths.