If you've ever had to endure a diagnostic session in a magnetic resonance imaging (MRI) machine, you know that lying motionless for up to 45 minutes can be uncomfortable at best. Add in the countless ear-ringing thumps, bangs and knocks, and you have a procedure that begs for any sort of abbreviation. Thanks to a new algorithm developed by an MIT research team, the time spent in that claustrophobic tube may soon be appreciably shortened, without much loss of accuracy.
MRI machines take advantage of the fact that the human body is largely composed of water. Very simply put, a carefully timed blend of strong magnetic fields and radio frequency pulses causes the hydrogen nuclei in these water molecules to partially align. When the fields are shut off, the nuclei in different tissues relax back to their original orientations and give off detectable signals, which can be assembled into a finely detailed image.
The procedure wouldn't be so onerous if only one image were being captured, but the process requires multiple scans to assemble a detailed representation of the target region. The larger the region being scanned, the longer the patient must remain perfectly still in the machine, sometimes for nearly three-quarters of an hour. Now, a group at MIT's Research Laboratory of Electronics has developed an algorithm that could reduce the lengthiest scans to just 15 minutes.
Team leaders Elfar Adalsteinsson and Vivek Goyal explain that their new time-saving algorithm functions by applying data from the first scan in the construction of successive images. This frees the scanner from having to go back to square one each time it fabricates a new image, in essence providing a rough sketch or outline that appreciably reduces the time required to generate subsequent scans.
"To create this outline, the software looks for features that are common to all the different scans, such as the basic anatomical structure," Adalsteinsson says. "If the machine is taking a scan of your brain, your head won't move from one image to the next, so if scan number two already knows where your head is, then it won't take as long to produce the image as when the data had to be acquired from scratch for the first scan."
To maintain accuracy, "the algorithm cannot impose too much information from the first scan onto the subsequent ones," Goyal points out. "You don't want to presuppose too much, as this would risk losing the unique tissue features revealed by the different contrasts."
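The idea the researchers describe — warm-starting each new reconstruction from the first scan's shared anatomy, while keeping the prior's influence weak enough that contrast-specific detail survives — can be illustrated with a toy example. This is only a sketch of the general prior-guided reconstruction technique, not MIT's actual algorithm; the function name, the linear measurement model and the `lam` weight are all illustrative assumptions.

```python
import numpy as np

def reconstruct(measurements, forward, prior=None, lam=0.1, steps=500, lr=1.0):
    """Toy iterative reconstruction by gradient descent.

    Minimizes ||forward @ x - measurements||^2, optionally plus a weak
    penalty lam * ||x - prior||^2 that nudges the result toward a prior
    image (e.g. the anatomical outline recovered from the first scan).
    A small lam means the prior only supplies the rough outline, so
    features unique to the new contrast are not suppressed.
    """
    n = forward.shape[1]
    # Warm start from the prior instead of from scratch: fewer iterations needed.
    x = np.zeros(n) if prior is None else prior.astype(float).copy()
    for _ in range(steps):
        grad = forward.T @ (forward @ x - measurements)  # data-fidelity term
        if prior is not None:
            grad += lam * (x - prior)                    # gentle pull toward shared anatomy
        x -= lr * grad / n
    return x
```

With `lam = 0` this is ordinary least-squares fitting; as `lam` grows, the solution is pulled toward the prior — exactly the trade-off Goyal describes, where presupposing too much would wash out the tissue features unique to each contrast.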
Despite the promise of shorter scan times, there are still a few kinks to iron out. Image quality suffers slightly, and processing the resulting data takes considerably longer. To address the latter issue, the team has borrowed technology from the gaming world in an effort to speed up computation. "Graphics processing units, or GPUs, are orders of magnitude faster at certain computational tasks than general processors, like the particular computational task that we need for this algorithm," Adalsteinsson says.
"This work is potentially of high significance because it applies to routine clinical MRI, among other applications," says Dwight Nishimura, the director of the Magnetic Resonance Systems Research Laboratory at Stanford University. "Ultimately, their approach might enable a substantial reduction in examination time."
Now if they can just make the whole process quieter.
Source: MIT