The GPU in your gaming rig performs a staggering number of calculations to bring the Cyberdemon to life in the new Doom, but scientists are increasingly applying that power to more academic pursuits. Russian physicists have put a computer running a consumer-level Nvidia GPU to work on equations that are normally handled by a powerful supercomputer, and found that the home PC solved them in 15 minutes, far faster than the supercomputer's time of two or three days.
A GPU is designed with many threads of processing power, which allows it to perform far more simultaneous calculations than a CPU. The researchers from Lomonosov Moscow State University wanted to take advantage of that, and to test whether consumer-level hardware could offer an accessible alternative to supercomputers in situations where many equations have to be solved in parallel.
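To make that idea concrete, here is a minimal, illustrative CUDA sketch (not the researchers' actual code) of the pattern involved: each GPU thread handles one independent calculation, and the hardware runs thousands of those threads at once.

```cuda
// Illustrative only: one GPU thread per independent calculation.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void evaluate(const double *x, double *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        // Each thread does its own small piece of work in parallel.
        y[i] = x[i] * x[i] + 1.0;
    }
}

int main() {
    const int n = 1 << 20;  // about a million independent evaluations
    double *x, *y;
    cudaMallocManaged(&x, n * sizeof(double));
    cudaMallocManaged(&y, n * sizeof(double));
    for (int i = 0; i < n; ++i) x[i] = i * 1e-6;

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    evaluate<<<blocks, threads>>>(x, y, n);  // launch many threads at once
    cudaDeviceSynchronize();

    printf("y[42] = %f\n", y[42]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same loop on a single CPU core would run the evaluations one after another; on the GPU, they are spread across thousands of small cores.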
The GPU tackled few-body scattering equations, which describe how multiple quantum particles interact with each other. When three or more bodies are involved, the equations become extremely difficult to solve, requiring tables with tens or even hundreds of thousands of rows and columns of data. Running on Nvidia software as well as custom programs written by the researchers, the GPU performed better than expected.
"We reached a speed we couldn't even dream of," says team leader Vladimir Kukulin. "The program computes 260 million complex double integrals on a desktop computer within three seconds. No comparison with supercomputers! My colleague from the University of Bochum in Germany carried out the calculations using one of the largest supercomputers in Germany with the famous blue gene architecture, which is actually very expensive. And what took his group two or three days we do in 15 minutes without spending a dime."
The team's goal in using consumer technology was to make these fields more accessible. Generally, only supercomputers are up to such tasks, and even then it's a time-consuming process. That means only a few groups around the world have the resources to perform these calculations, which hinders overall progress in the fields that rely on them, including quantum mechanics and nuclear and atomic physics.
The processors used by the team retail for between US$300 and $500, which is far easier on the wallet than the hundreds of millions of dollars an institute can spend on a supercomputer. In fact, GPUs have been capable of this kind of work for the past 10 years or so, but their value is only now beginning to be appreciated.
"This work, in our opinion, opens up completely new ways to analyze nuclear and resonance chemical reactions," says Kukulin. "It can also be useful for solving a large number of computing tasks in plasma physics, electrodynamics, geophysics, medicine and many other areas of science. We want to organize a kind of training course, where researchers from various scientific areas of peripheral universities that do not have access to supercomputers could learn to do on their PCs the same thing that we do."
Does this mean that supercomputers do not use parallel processing, or is it just down to the program?
Does it look like universities have been sold a pup? This rather reminds me of some dodgy software packages sold to governments that turned out not to perform as specified.
Also, does this mean that supercomputers, suitably programmed, could now become super supercomputers?
It has long been known that GPUs excel at parallel computing. They consist of thousands of tiny cores that can do separate calculations. That doesn't mean supercomputers are about to be replaced. It just means that certain types of calculations lend themselves to being done well on GPUs, and others not so much.