Although movie and game producers can now create computer-animated images of just about anything, the sounds made by those onscreen items still typically consist of recordings of real-world objects. That may not be the case for much longer, however, thanks to a computer system developed at Stanford University.
There are a few drawbacks to using actual physical objects for creating such sound effects. For one thing, producers need to acquire the necessary items, and then take the time to record them as they get knocked together, dropped on the ground, or whatnot. In editing, the audio recordings then have to be painstakingly synced up to the movements of the animated images.
Additionally, the items depicted onscreen sometimes have no direct equivalent in the real world, making it challenging to create appropriate sound effects.
That's where Stanford's new Integrated Wavesolver system comes in. Users input the geometry of the computer-generated object, along with parameters such as the density and stiffness of the depicted material, and the action that's being performed. The system responds by figuring out how a real-world object with those characteristics would vibrate in that situation, and how those vibrations would excite sound waves.
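The broad recipe described here – derive an object's vibration modes from its geometry and material parameters, then let those modes ring out as sound – is the general idea behind what graphics researchers call modal sound synthesis. The sketch below is a minimal, hypothetical illustration of that idea in Python; it is not Stanford's code, and the toy spring-mass chain, parameter values and function name are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical toy model: a 1-D chain of point masses joined by springs
# stands in for the mesh of a computer-generated object. Its vibration
# modes come from the eigenproblem K v = w^2 M v, and a "strike" excites
# damped sinusoids at the modal frequencies.

def modal_sound(n_masses=8, mass=0.01, stiffness=4.0e5, damping=2.0,
                strike_node=3, sample_rate=44100, duration=1.0):
    # Lumped mass matrix (kg) and banded stiffness matrix (N/m)
    M = np.eye(n_masses) * mass
    K = np.zeros((n_masses, n_masses))
    for i in range(n_masses - 1):
        K[i, i] += stiffness
        K[i + 1, i + 1] += stiffness
        K[i, i + 1] -= stiffness
        K[i + 1, i] -= stiffness

    # Solve for modal frequencies (rad/s) and mode shapes
    evals, evecs = np.linalg.eigh(np.linalg.inv(M) @ K)
    omegas = np.sqrt(np.clip(evals, 0.0, None))

    # An impulse at one node projects onto each mode shape
    impulse = np.zeros(n_masses)
    impulse[strike_node] = 1.0
    gains = evecs.T @ impulse

    # A sum of exponentially decaying sinusoids approximates the sound
    t = np.arange(int(sample_rate * duration)) / sample_rate
    audio = np.zeros_like(t)
    for g, w in zip(gains, omegas):
        if w > 1.0:  # skip the near-zero rigid-body mode
            audio += g * np.exp(-damping * t) * np.sin(w * t)
    return audio / np.max(np.abs(audio))

samples = modal_sound()  # one second of audio, ready to write to a WAV file
```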
The result is sound effects that are not only realistic, but also already synced to the animated movements. The system additionally accounts for the way the sound waves created by each object would bend, bounce or deaden when interacting with other objects in the same scene, although it doesn't yet account for room acoustics.
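As a rough illustration of what that bending, bouncing and deadening means computationally, the hypothetical sketch below steps a simple 2-D wave equation on a grid, with a rigid block that reflects and diffracts a pulse and a soft patch that absorbs it. It is only a cartoon of wave-based simulation under assumed grid sizes and material values, not the Stanford solver.

```python
import numpy as np

# Hypothetical toy simulation: a pressure pulse spreads across a 2-D grid,
# reflecting off a rigid block, diffracting around its edges, and dying
# away in an absorbing patch.

N, steps = 200, 400            # grid cells per side, number of time steps
c, dx = 343.0, 0.01            # speed of sound (m/s), cell size (m)
dt = dx / (c * np.sqrt(2.0))   # stable time step (CFL condition)

p_prev = np.zeros((N, N))      # pressure field one step in the past
p_curr = np.zeros((N, N))      # pressure field at the current step

# A rigid rectangular obstacle: pressure is forced to zero inside it,
# so waves bounce off it and bend around its edges.
obstacle = np.zeros((N, N), dtype=bool)
obstacle[80:120, 120:130] = True

# A soft patch that progressively deadens waves passing through it.
absorber = np.zeros((N, N))
absorber[150:170, :] = 0.05

coeff = (c * dt / dx) ** 2
for n in range(steps):
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr)
    p_next = 2.0 * p_curr - p_prev + coeff * lap
    p_next *= (1.0 - absorber)          # attenuation in the absorbing patch
    p_next[obstacle] = 0.0              # hard reflection at the obstacle
    p_next[60, 60] += np.exp(-((n - 20) ** 2) / 50.0)  # short source pulse
    p_prev, p_curr = p_curr, p_next

# p_curr now holds the scattered pressure field after all time steps.
```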
The researchers are now working on methods of speeding up the process, which currently takes some time to produce results. Once perfected, the technology could also be used by engineers to see what conceptual products would sound like in use, before being physically produced.
"Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically," says Stanford's Prof. Doug James. "This fills that void."
You can see and hear some examples of the system's work in the video below.
Source: Stanford University