Scientists at the University of Twente in the Netherlands have devised a new type of electronic chip that takes after the human brain. Their device is highly power-conscious, massively parallel, and can manipulate data in arbitrary ways – even though it doesn't need to be explicitly designed to perform any task. The advance could pave the way for computers that think more like we do.
(When the) chips are down
Electronic chips as they are currently designed come with plenty of drawbacks. Even the simplest operations, like adding or subtracting, need large numbers of transistors arranged in a very specific, well thought-out pattern. These transistors quickly add up and drain power even when idle (unless specific measures are taken). Moreover, most circuits can't effectively process information in parallel, leading to a further waste of time and energy.
All of these factors make it especially hard for today's computers to perform many crucial tasks quickly and on little power – particularly the kinds of tasks that the human brain can tackle with ease, like recognizing a visual pattern or understanding human language. In fact, when it comes to simulating brain-like functionality, many researchers have opted to abandon traditional computer architectures altogether.
Alternative chip designs that try to mimic the prowess and efficiency of the brain usually do so by either resorting to massive parallelism or by using neuron-like structures as their basic building blocks. But these approaches retain one big drawback: they still rely on fallible human minds to design their hardware and software.
No design needed
A research team led by professors Wilfried van der Wiel and Hajo Broersma is walking a different path. Remarkably, their device takes people out of the circuit design equation altogether and leaves it to the chip itself to figure out how to best manipulate its inputs into the desired output through artificial evolution.
Their proof-of-principle device consists of a network of up to 100 densely interconnected gold nanoparticles, 20 nanometers in size, each acting as a tiny transistor. The researchers call it a natural computer, because the way in which the particles are interlinked is reminiscent of neural networks in the brain. Unlike in a standard circuit, the scientists themselves don't exactly know how the transistors in a natural computer connect with each other – but this doesn't stop the circuit from working as intended.
Here's, in a nutshell, how it all works. The nanoparticle cluster can be thought of as a black box with two input signals, one output signal, and six control voltages that are fed to the edge of the cluster and affect how the network handles its inputs. Rather than designing the entire circuit by hand, the researchers can simply look for the combination of control voltages that processes all the inputs into the correct outputs.
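In other words, the optimization treats the cluster as an opaque function of eight voltages: two inputs, six controls, one output. Here is a minimal sketch of that abstraction in Python; the `measure_output` routine and the XOR target are hypothetical placeholders for the actual lab measurement and task, not part of the published work.

```python
# Sketch of the "black box" view of the nanoparticle cluster (illustrative only).

def measure_output(in_a, in_b, controls):
    """Stand-in for the physical measurement: apply the two input voltages and the
    six control voltages to the cluster, then read back the output signal."""
    raise NotImplementedError("replace with the actual device measurement")

# The desired behaviour is expressed as a truth table, e.g. XOR on 0 V / 1 V logic levels.
XOR_TARGET = {(0.0, 0.0): 0.0, (0.0, 1.0): 1.0, (1.0, 0.0): 1.0, (1.0, 1.0): 0.0}

def error(controls, target=XOR_TARGET):
    """Summed deviation between measured and desired outputs. The search simply
    looks for the six control voltages that drive this number toward zero."""
    return sum(abs(measure_output(a, b, controls) - want)
               for (a, b), want in target.items())
```

Everything the chip "knows" about its task lives in those six numbers; how they are found is described next.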
"Although we understand the generic physics principles underlying the cluster’s behavior, we do not know the actual current paths in the network on a nanoscale level," van der Weil told Gizmag.
Using "designless" systems such as this means that costly design mistakes are avoided, and in fact this approach can work around, or even take advantage of, material defects like crosstalk that must be avoided at all costs in conventional electronics.
Even better, if the chip were to sustain heavy damage, it could still be reconfigured just as easily by adjusting the voltage values for the signals that control how the nanoparticle cluster manipulates the data.
Natural selection and "baby voltages"
Selection of the optimal values for the six control voltages is not done manually or even by brute force, but instead by using something called a genetic algorithm (GA). In computer science, this is a clever (and well-known) way to optimize a set of parameters for an arbitrary condition. Just like the idea behind the circuit itself, GAs also take shameless inspiration from nature.
The basic idea behind GAs is to create successive "generations" of values by "marrying" them to each other (by combining them and adding an element of randomness), and then seeing which of the "offspring" is the best fit. This is in obvious analogy to the way in which natural selection picks, through successive generations, the optimal gene pool that is best adapted to live in the surrounding environment.
Though this process is more complicated than mere brute-forcing, it's also much faster. In this case, the researchers say the process takes less than an hour (as opposed to several days) to find all six voltage parameters for a set condition.
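For illustration only, the sketch below shows what such a search could look like in Python. It is not the team's code: the `measure_output` surrogate is an arbitrary toy function standing in for the physical device, and the voltage range, population size and mutation strength are made-up assumptions.

```python
import math
import random

# Toy surrogate for the nanoparticle cluster. The real device maps (two inputs,
# six control voltages) -> one output through physics the researchers themselves
# don't fully trace; this stand-in exists only so the sketch runs and is NOT a
# model of the actual chip.
def measure_output(in_a, in_b, controls):
    s = in_a * controls[0] + in_b * controls[1] + in_a * in_b * controls[2]
    s += controls[3] * (in_a - in_b) ** 2 + controls[4]
    return 1.0 / (1.0 + math.exp(-(s + controls[5])))  # squashing nonlinearity

# Desired behaviour as a truth table (here: XOR on 0 V / 1 V logic levels).
TARGET = {(0.0, 0.0): 0.0, (0.0, 1.0): 1.0, (1.0, 0.0): 1.0, (1.0, 1.0): 0.0}

def error(controls):
    """Summed deviation between measured and desired outputs; lower is better."""
    return sum(abs(measure_output(a, b, controls) - want)
               for (a, b), want in TARGET.items())

def evolve(pop_size=50, generations=200, v_range=(-1.0, 1.0), sigma=0.1):
    """Minimal genetic algorithm over the six control voltages."""
    new_genome = lambda: [random.uniform(*v_range) for _ in range(6)]
    population = [new_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=error)                       # selection: fittest first
        survivors = population[: pop_size // 4]
        children = []
        while len(survivors) + len(children) < pop_size:
            mum, dad = random.sample(survivors, 2)
            child = [random.choice(genes) for genes in zip(mum, dad)]  # crossover
            child = [v + random.gauss(0.0, sigma) for v in child]      # mutation
            children.append(child)
        population = survivors + children
    return min(population, key=error)

# Swap TARGET for another truth table (AND, NAND, ...) to aim at a different gate.
best = evolve()
print("best control voltages:", [round(v, 3) for v in best])
print("remaining error:", round(error(best), 3))
```

Pointing the same loop at a different truth table is all it takes to steer the search toward a different function.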
Optimizing the voltages for different functionalities, the researchers were able to indirectly "program" the chip to act as any one of the Boolean logic gates. This was done using a number of nanoparticles comparable to the number of transistors that would be needed to implement the same function on a standard chip.
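In the framing sketched above, switching gates amounts to handing the optimizer a different target truth table, for example (using the same 0 V / 1 V logic convention assumed earlier):

```python
# Two-input Boolean targets the search could be pointed at (illustrative only).
GATES = {
    "AND":  {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1},
    "OR":   {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1},
    "NAND": {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0},
    "NOR":  {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0},
    "XOR":  {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0},
    "XNOR": {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1},
}
```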
Toward brain-like computing
According to the researchers, this structure has great potential for saving energy compared to a standard chip:
"Natural computers have, in general, the promise to be more energy efficient," van der Weil told Gizmag. "The promise for lower energy consumption is based on the fact that natural computers take more advantage of the computational power of matter than conventional computers, and that many computational processes occur in parallel as opposed to sequentially as in conventional computers."
The researchers also believe their device holds real promise with respect to complex tasks involving pattern recognition that the human brain excels at:
"In our device many operations can occur in parallel, analogous to information processing in the brain," van der Weil continued. "We know that the brain, but also mathematical neural networks, are very suitable for pattern recognition. We believe that our system can be a physical realization of a cellular neural network with the accompanying advantages compared to conventional circuits."
Their invention doesn't come without drawbacks, however. One is that the system can currently only operate at temperatures close to absolute zero – though the scientists tell us this can be easily fixed.
"Our approach only works if the nanoparticles act as single-electron transistors and relies on Coulomb blockade," says van der Wiel. "For 20 nm nanoparticles this means that we have to be at temperatures lower than 5 Kelvin (-268° C, - 451° F) or so. However, there is no fundamental reason why our approach should not work at room temperature. This would require smaller nanoparticles."
A more complex circuit may need to use clusters of 100 nanoparticles as its elementary building block, since optimizing larger nanoparticle networks through signals that only reach their periphery could be difficult.
"We either need to go to interconnecting smaller networks, or to more advanced electrode configurations. This is work in progress," says van der Weil.
The next steps for the team's research will be to "create larger networks with more complex functionality, preferably at room temperature."
The findings have been published in the journal Nature Nanotechnology.
Source: University of Twente