
3D animation tech puts other people's words in celebrities' mouths

[Image: The technique makes it look like words said by George Bush are actually being spoken by other people (Credit: University of Washington)]
[Image: A visual breakdown of Daniel Craig and Tom Hanks images as taken from the internet (Credit: University of Washington)]

Researchers at the University of Washington (UW) recently demonstrated how 3D video images of Tom Hanks, Daniel Craig and several other celebrities could be created by piecing together still images and sound bites retrieved from the internet. They also showed how their algorithms could animate those digital models, getting them to say things that were actually said by someone else.

The technology relies on advances in 3D face reconstruction, tracking, alignment, multi-texture modeling and puppeteering, developed by a research group led by Ira Kemelmacher-Shlizerman, a UW assistant professor of computer science and engineering.

The UW research team's latest advances include the ability to transfer expressions and the way a particular person speaks onto the face of someone else, for instance mapping former President George W. Bush's mannerisms onto the faces of other politicians and celebrities. In the demonstration video below, Bush is seen speaking, but the words are made to look like they're coming from a number of different celebrities.
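
The published system works with full 3D face models, but the underlying puppeteering idea can be pictured in 2D: measure how the driver's facial landmarks move away from a neutral pose, then apply that motion to the target's neutral landmarks. The sketch below is a simplified stand-in for the paper's method; the (68, 2) landmark arrays and the size-based scaling are illustrative assumptions, not details from the research.

```python
# Toy 2D stand-in for expression transfer, not the UW team's 3D method.
# Each argument is a (68, 2) array of facial landmarks detected in a
# shared, aligned coordinate frame (e.g. dlib's 68-point layout).
import numpy as np

def transfer_expression(drv_neutral, drv_frame, tgt_neutral):
    # How far each driver landmark has moved from the neutral pose.
    offset = drv_frame - drv_neutral
    # Scale by the ratio of face extents so motion on a small face
    # maps sensibly onto a larger one (an assumed heuristic).
    scale = np.ptp(tgt_neutral, axis=0) / np.ptp(drv_neutral, axis=0)
    return tgt_neutral + offset * scale
```

Animating the target then amounts to warping its imagery so the detected landmarks follow the returned positions, frame by frame.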

The machine learning algorithms developed by the UW researchers mined at least 200 internet images of each subject, taken over time in various scenarios and poses, a process known as learning "in the wild."
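
To picture what that mining step involves, here is a hedged sketch of one way to detect and align face crops from a folder of crawled photos, using dlib's stock landmark detector and OpenCV. The folder name, template coordinates and choice of landmarks are illustrative assumptions, not details from the UW paper.

```python
# Minimal face-collection-and-alignment sketch; an assumption-laden
# stand-in for the UW pipeline, not a reproduction of it.
import glob

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# dlib's standard 68-point landmark model (downloaded separately).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Where the outer eye corners and nose tip should land in a 256x256
# crop; the coordinates are illustrative, not from the paper.
TEMPLATE = np.float32([[80, 100], [176, 100], [128, 176]])

def align_face(path):
    """Return a canonically aligned 256x256 face crop, or None."""
    img = cv2.imread(path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    # Landmarks 36 and 45 are the outer eye corners, 30 the nose tip.
    pts = np.float32([(shape.part(i).x, shape.part(i).y)
                      for i in (36, 45, 30)])
    M = cv2.getAffineTransform(pts, TEMPLATE)
    return cv2.warpAffine(img, M, (256, 256))

aligned = []
for path in glob.glob("crawled_photos/*.jpg"):  # hypothetical folder
    face = align_face(path)
    if face is not None:
        aligned.append(face)
print(f"aligned {len(aligned)} face crops")
```

It is aligned crops like these, in the hundreds per person, that make the reconstruction and texture-modeling stages tractable.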

On the face of it, it would seem that in the wrong hands the technology could eventually make it look like words said by Donald Trump were actually being spoken by President Obama, or vice versa, to give just one example. The UW researchers' goal, however, is far less nefarious. They hope to eventually let family members converse with a fully interactive, three-dimensional digital persona of a distant relative, created from family photo albums and videos, historic collections or other content.

As virtual and augmented reality technologies develop, the researchers envision their 3D approach replacing current 2D video-chat tools such as Skype or FaceTime.

"You might one day be able to put on a pair of augmented reality glasses and there is a 3D model of your mother on the couch," says Kemelmacher-Shlizerman. "Such technology doesn't exist yet — the display technology is moving forward really fast — but how do you actually re-create your mother in three dimensions?".

Projecting even farther into the future of entertainment, the UW researchers point out that their approach could also eventually replace the current process by which detailed digital movie characters are created. To create a fictitious character like Benjamin Button, for instance, every angle and movement of Brad Pitt playing that character had to be recorded in a controlled setting. Those images then had to be edited together to create the final character, who appears to move and talk seamlessly as an old man.

The UW researchers will present the results of their project in an open-access paper at the International Conference on Computer Vision in Chile on Dec. 16.

Source: University of Washington
