To keep up with the constant flood of data the digital world is generating, computer manufacturers have thrown more and more processing cores at the problem – currently, the world's most powerful supercomputer boasts over 10 million cores. Forging a different path, Hewlett Packard Enterprise (HPE) has overhauled computing architecture to put memory at the center of the system, showcasing it through a prototype it calls "The Machine".
Current computers are built around a series of processors, each of which works on one task at a time, though a processor can be divided into multiple cores and threads to help it multitask. But with each processor relying largely on its own small pockets of memory, a lot of time and energy is wasted as processors try to talk to each other, and even more as data is shuffled between memory and storage. Improvements are constantly being made to speed things up, but this fragmented architecture has an inevitable bottleneck.
The Machine project scrapped the existing system and started again, with an eye towards crunching big data. HPE put what it calls Memory-Driven Computing at the heart of the new architecture, giving all the processors in the system equal access to a single shared pool of memory.
The current prototype shares a whopping 160 TB between 40 nodes, making it the largest single-memory system in the world. Using non-volatile memory (NVM), the data is processed and stored in the same place, eliminating the need to send it to different parts of the system. And rather than each component communicating through its own interconnect system, Memory-Driven Computing uses a universal protocol, which makes The Machine more efficient and potentially modular.
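As a loose, single-machine analogy for that shared pool, Python's standard `multiprocessing.shared_memory` module lets multiple handles (and multiple processes) attach to one named block of memory by name, so data is written once and read in place rather than copied between private buffers. This is only an illustrative sketch of the shared-memory idea, not HPE's fabric or protocol:

```python
from multiprocessing import shared_memory

# Create one named block that any process on the machine can attach to,
# loosely analogous to The Machine's single pool of memory.
block = shared_memory.SharedMemory(create=True, size=16)
block.buf[:5] = b"hello"  # data lives in the shared pool itself

# A second handle (here in the same process, but it could just as well be
# another process) attaches to the same bytes -- no copy, no message passing.
view = shared_memory.SharedMemory(name=block.name)
data = bytes(view.buf[:5])

view.close()
block.close()
block.unlink()  # free the shared block when done
```

In The Machine, the equivalent pool is fabric-attached and non-volatile, so every node sees the same 160 TB directly instead of exchanging copies over per-device interconnects.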
Communication between the nodes is also sped up thanks to photonics. Instead of sending information in the form of electrons moving through copper wires, photonics allows the system to transmit light through optic fiber ribbons, making the interconnects smaller, faster, cooler and more energy efficient.
While it's already an impressive system, The Machine is scalable, and HPE says future versions could be capable of an absolutely mind-boggling 4,096 yottabytes (YB) – for reference, 1 YB is more than 1 trillion TB. That's access to an incredible amount of data simultaneously, and HPE believes such a system has a future in data centers and space travel.
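A quick sanity check on those units, using decimal (SI) prefixes as storage vendors do (binary prefixes would give slightly larger figures):

```python
# Decimal (SI) storage units, in bytes.
TB = 10**12  # 1 terabyte
YB = 10**24  # 1 yottabyte

tb_per_yb = YB // TB        # 1 YB = 1,000,000,000,000 TB -- a trillion TB
ceiling_bytes = 4096 * YB   # HPE's projected 4,096 YB ceiling, in bytes
```

So the projected ceiling works out to about 4.1 × 10²⁷ bytes, or over 25 trillion times the 160 TB prototype.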
"We believe Memory-Driven Computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society," says Mark Potter, CTO at HPE. "The architecture we have unveiled can be applied to every computing category — from intelligent edge devices to supercomputers."
The Machine team is still looking into ways to improve the system's photonics, software and security.
Source: Hewlett Packard Enterprise