If you're not satisfied with the realism of computer-generated animals in movies and games, then you might be interested in the latest news out of the University of California San Diego. Working with colleagues from UC Berkeley, computer scientists there have developed a new method of rendering fur that's reportedly much more accurate than existing techniques.
Currently, fur is simulated in much the same manner as human hair: the computer uses a ray-tracing model that follows light as it bounces from one fiber to another. The technique requires a lot of processing power and takes a long time.
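To give a feel for why that's slow, here is a minimal Python sketch of this style of path tracing, in which a single light path is followed from fiber to fiber and many thousands of paths are averaged per pixel. The albedo and per-bounce scattering values are illustrative placeholders, not numbers from the research.

```python
import random

def trace_path(max_bounces=10):
    """Follow one light path as it scatters from fiber to fiber.

    A schematic sketch of multi-fiber path tracing, not a production
    renderer; fiber_albedo and the per-bounce contribution are
    stand-ins for real fiber scattering models.
    """
    fiber_albedo = 0.8   # assumed fraction of light a fiber scatters onward
    throughput = 1.0
    radiance = 0.0
    for _ in range(max_bounces):
        throughput *= fiber_albedo                         # each hit attenuates the path
        radiance += throughput * random.uniform(0.0, 0.1)  # light scattered toward the camera
        if random.random() > throughput:                   # Russian roulette: end long paths early
            break
    return radiance

# Averaging many such paths per pixel is what makes the approach costly:
# realistic fur can require enormous numbers of paths per frame.
samples = 10_000
estimate = sum(trace_path() for _ in range(samples)) / samples
print(f"pixel radiance estimate: {estimate:.4f}")
```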
The problem is that fur isn't the same as hair. Fur fibers have a much larger medulla (the central cylinder of the fiber), which scatters light in a way that hair's doesn't. This is one of the main reasons that fur and hair look different from one another.
With that in mind, the researchers turned to a well-understood concept known as subsurface scattering. It describes the way in which light enters the surface of a translucent object at one point, scatters internally at various angles, and then exits the object at another point.
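For those who want the formalism, subsurface scattering is conventionally described in computer graphics by a BSSRDF, a function S that relates light arriving at one surface point to light leaving at another. The textbook rendering integral reads as follows; the paper's exact parameterization for fur may differ:

```latex
L_o(x_o, \omega_o) = \int_A \int_{\Omega}
    S(x_i, \omega_i;\, x_o, \omega_o)\,
    L_i(x_i, \omega_i)\,
    (n \cdot \omega_i)\; d\omega_i\, dA(x_i)
```

Here L_i is the light arriving at point x_i from direction ω_i, and L_o is the light leaving at point x_o in direction ω_o, summed over the object's surface area A and the hemisphere of incoming directions Ω.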
Using a neural network, the team created an algorithm that applies subsurface scattering to the rendering of fur. After being trained on just a single scene, the network was able to generalize, applying the concept to every scene it was presented with.
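As a rough illustration of how a network can learn such a mapping, the sketch below trains a tiny multilayer perceptron to map fiber properties to the parameters of a scattering profile. The input features, targets, and architecture here are assumptions invented for the example; the study's actual network is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: fiber properties in, scattering profile out.
# Inputs could be, e.g., (medulla radius, fiber roughness, incident angle);
# targets the parameters of a subsurface-scattering profile. Both are
# placeholders, not the features used in the paper.
X = rng.uniform(size=(256, 3))
Y = np.sin(X @ rng.normal(size=(3, 2)))   # stand-in "ground truth"

# One hidden layer is enough to show the idea.
W1, b1 = rng.normal(scale=0.5, size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 2)), np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)    # hidden activations
    return h @ W2 + b2, h       # predicted profile parameters

for step in range(2000):
    pred, h = forward(X)
    err = pred - Y                          # gradient of squared error
    gW2, gb2 = h.T @ err / len(X), err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1, gb1 = X.T @ dh / len(X), dh.mean(axis=0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g                        # plain gradient descent

print("final training MSE:", float(((forward(X)[0] - Y) ** 2).mean()))
```

Once trained, evaluating a small network like this is far cheaper than simulating many light bounces through the fiber geometry, which helps explain how an approach along these lines can be faster.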
The resulting simulations are reportedly not only more realistic than those created using state-of-the-art traditional methods, but also ten times faster to produce.